scispace - formally typeset

Wei Chen

Researcher at Microsoft

Publications -  226
Citations -  14625

Wei Chen is an academic researcher at Microsoft. The author has contributed to research in topics including Maximization and Greedy algorithms. The author has an h-index of 47 and has co-authored 226 publications receiving 12843 citations. Previous affiliations of Wei Chen include the University of British Columbia and Stony Brook University.

Papers
Journal ArticleDOI

Complete submodularity characterization in the comparative independent cascade model

TL;DR: A full characterization of submodularity in the comparative independent cascade (Com-IC) model of two-idea cascades is given, for competing ideas and for complementary ideas, with or without reconsideration.
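The diminishing-returns property that the paper characterizes can be checked by brute force on small ground sets. The sketch below (helper names are hypothetical, not from the paper) verifies it for a toy coverage function, a classic submodular family:

```python
from itertools import combinations

def powerset(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_submodular(f, ground):
    """Brute-force diminishing-returns check:
    f(S + v) - f(S) >= f(T + v) - f(T) for all S <= T and v not in T."""
    subsets = powerset(ground)
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for v in ground - T:
                if f(S | {v}) - f(S) < f(T | {v}) - f(T):
                    return False
    return True

# Toy ground set: each element is the set of items it "covers".
ground = frozenset({frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})})

def coverage(S):
    covered = set()
    for e in S:
        covered |= e
    return len(covered)

print(is_submodular(coverage, ground))               # True: coverage is submodular
print(is_submodular(lambda S: len(S) ** 2, ground))  # False: |S|^2 is supermodular
```

This exhaustive check is exponential in the ground-set size, which is exactly why analytical characterizations like the paper's are valuable.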

On Failure Detectors Weaker Than Ever

TL;DR: This paper proposes a series of partitioned failure detectors that can solve n-set agreement yet are strictly weaker than [8], previously the weakest failure detector known to circumvent impossibility results in the asynchronous shared-memory model.
Posted Content

Capturing Complementarity in Set Functions by Going Beyond Submodularity/Subadditivity

TL;DR: In this paper, two new "degree of complementarity" measures, referred to as supermodular width and superadditive width, are introduced to characterize how far monotone set functions are from being submodular and subadditive, respectively.
Book ChapterDOI

Software testing process automation based on UTP – a case study

TL;DR: This paper introduces an approach that transforms design models represented in UML to testing models represented in UTP (UML Testing Profile), and furthermore transforms the testing models into TTCN-3 test cases that can be executed on a TTCN-3 execution engine, according to the TTCN-3 mapping interface defined in UTP.
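As a minimal sketch of the model-to-test-case transformation idea (a toy dict-based testing model, not the real UTP metamodel or the paper's toolchain), a testing model can be rendered into a skeletal TTCN-3 testcase:

```python
def to_ttcn3(test_case):
    """Render a toy UTP-style test-case description (a plain dict here,
    not the actual UTP metamodel) as a skeletal TTCN-3 testcase."""
    lines = [f'testcase {test_case["name"]}() runs on {test_case["component"]} {{']
    for step in test_case["steps"]:
        if step["kind"] == "send":
            lines.append(f'  pt.send({step["message"]});')
        else:  # anything else is treated as an expected reply
            lines.append(f'  pt.receive({step["message"]});')
    lines.append('  setverdict(pass);')
    lines.append('}')
    return "\n".join(lines)

# Hypothetical login scenario derived from a design model.
tc = {"name": "tc_login", "component": "UserComp",
      "steps": [{"kind": "send", "message": "LoginReq"},
                {"kind": "receive", "message": "LoginAck"}]}
print(to_ttcn3(tc))
```

A real pipeline of this kind would walk the UML/UTP model elements rather than a dict, but the emit step has the same shape: each abstract test step maps to a concrete TTCN-3 statement.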
Posted Content

Ensemble-Compression: A New Method for Parallel Training of Deep Neural Networks

TL;DR: In this paper, the authors propose to aggregate local models by ensemble, i.e., averaging the outputs of the local models instead of their parameters, and to carry out model compression after each ensemble step, realized by a distillation-based method.
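A toy numeric illustration of the core idea, averaging model outputs (ensemble) rather than model parameters, using one hypothetical ReLU unit per worker; the distillation-based compression step is omitted:

```python
def local_model(w, x):
    # One hypothetical worker's "network": a single ReLU unit.
    return max(0.0, w * x)

weights = [-1.0, 3.0]   # parameters learned by two hypothetical workers
x = 1.0

# Ensemble aggregation (the paper's direction): average the *outputs*.
ensemble_out = sum(local_model(w, x) for w in weights) / len(weights)

# Conventional parameter averaging: average the weights, then evaluate once.
param_avg_out = local_model(sum(weights) / len(weights), x)

print(ensemble_out, param_avg_out)  # 1.5 vs 1.0: the two schemes disagree
```

For nonlinear models the two aggregation schemes generally differ, which is why output averaging needs a follow-up compression (distillation) step to return to a single deployable model.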