
Tianfei Zhou

Researcher at ETH Zurich

Publications: 67
Citations: 1723

Tianfei Zhou is an academic researcher at ETH Zurich. The author has contributed to research on topics including computer science and segmentation, has an h-index of 12, and has co-authored 44 publications receiving 485 citations. Previous affiliations of Tianfei Zhou include Beijing Institute of Technology.

Papers
Posted Content

Exploring Cross-Image Pixel Contrast for Semantic Segmentation

TL;DR: In this article, a pixel-wise contrastive framework is proposed that enforces pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes.
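The idea behind such a framework can be sketched as a supervised, InfoNCE-style contrastive loss over pixel features: for each pixel, same-class pixels act as positives and different-class pixels as negatives. The following is a minimal illustrative sketch of that general loss, not the paper's actual implementation (which operates on deep feature maps with sampling strategies and a memory bank):

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style supervised contrastive loss over pixel embeddings.

    Pixels with the same semantic label (positives) are pulled together
    in embedding space; pixels with different labels are pushed apart.
    embeddings: (N, D) pixel feature vectors; labels: (N,) class ids.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # cosine-similarity logits
    np.fill_diagonal(sim, -np.inf)                 # exclude self-comparisons
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos, 0.0)
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                              # anchors with >= 1 positive
    # mean negative log-likelihood of each anchor's positive pairs
    per_anchor = -np.where(pos > 0, log_prob, 0.0).sum(axis=1)
    return (per_anchor[valid] / n_pos[valid]).mean()
```

The loss is low when embeddings cluster by class and grows when positives are far apart, which is the cross-image pixel-grouping pressure the TL;DR describes.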
Journal ArticleDOI

MATNet: Motion-Attentive Transition Network for Zero-Shot Video Object Segmentation

TL;DR: A novel end-to-end neural network, MATNet, is proposed for zero-shot video object segmentation (ZVOS); motivated by human visual attention behavior, it leverages motion cues as a bottom-up signal to guide the perception of object appearance.
Book ChapterDOI

Video Object Segmentation with Episodic Graph Memory Networks

TL;DR: In this article, a graph memory network is developed to address the novel idea of "learning to update the segmentation model" by exploiting an episodic memory network to store frames as nodes and capture cross-frame correlations by edges.
Posted Content

Video Object Segmentation with Episodic Graph Memory Networks

TL;DR: This work exploits an episodic memory network, organized as a fully connected graph, to store frames as nodes and capture cross-frame correlations by edges, yielding a neat yet principled framework that generalizes well to both one-shot and zero-shot video object segmentation tasks.
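The memory mechanism described above can be illustrated, at a much-simplified level, as an attention-style readout: the current frame queries the stored frame features (nodes), and similarity scores (edges) weight how much each memory entry contributes. This is an illustrative sketch only, with hypothetical function names, not the paper's graph memory architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_readout(query, memory_keys, memory_values):
    """Attention-style readout from an episodic frame memory.

    query:                     (D,) feature of the current frame
    memory_keys/memory_values: (M, D) features of M stored frames (nodes)
    The similarity-based weights play the role of graph edges: they
    decide how much each stored frame informs the current prediction.
    """
    weights = softmax(memory_keys @ query)   # (M,) edge weights
    return weights @ memory_values           # (D,) aggregated context
```

A query that closely matches one stored frame receives a readout dominated by that frame's features, which is the basic behavior a cross-frame memory relies on.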
Journal ArticleDOI

Motion-Attentive Transition for Zero-Shot Video Object Segmentation

TL;DR: An asymmetric attention block within a two-stream encoder is proposed, which transforms appearance features into motion-attentive representations at each convolutional stage.