Jiawei Han

Researcher at University of Illinois at Urbana–Champaign

Publications: 1,302
Citations: 155,054

Jiawei Han is an academic researcher at the University of Illinois at Urbana–Champaign. His research focuses on topics including cluster analysis and knowledge extraction. He has an h-index of 168 and has co-authored 1,233 publications receiving 143,427 citations. His previous affiliations include the Georgia Institute of Technology and the United States Army Research Laboratory.

Papers
Proceedings ArticleDOI

Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data

TL;DR: This paper proposes a two-stage method called Assembler, which conceptually organizes all spatial co-evolving patterns (SCPs) into a novel structure called the SCP search tree; this tree facilitates effective pruning of the search space so that SCPs are generated efficiently.
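
A minimal sketch of the pruning idea the TL;DR describes, under illustrative assumptions: sensor readings are short numeric series, the co-evolution score of a sensor set is its minimum pairwise correlation (which is anti-monotone), and the candidate tree is explored depth-first. The scoring function and data layout are hypothetical; this is not the paper's actual Assembler algorithm.

```python
# Illustrative tree-structured pattern search with anti-monotone pruning.
# The score function and data are toy assumptions, not the real Assembler.
from itertools import combinations

def correlation(series_a, series_b):
    """Toy co-evolution score: fraction of steps moving in the same direction."""
    same = sum((a2 - a1) * (b2 - b1) > 0
               for (a1, a2), (b1, b2) in zip(zip(series_a, series_a[1:]),
                                             zip(series_b, series_b[1:])))
    return same / max(len(series_a) - 1, 1)

def pattern_score(readings, sensors):
    """Score of a sensor set = minimum pairwise correlation (anti-monotone)."""
    return min(correlation(readings[a], readings[b])
               for a, b in combinations(sensors, 2))

def mine_scps(readings, threshold):
    """Grow sensor sets depth-first; prune a branch as soon as the
    anti-monotone score drops below the threshold."""
    sensors = sorted(readings)
    results = []

    def expand(current, start):
        for i in range(start, len(sensors)):
            candidate = current + [sensors[i]]
            if len(candidate) >= 2 and pattern_score(readings, candidate) < threshold:
                continue  # every superset scores no higher: prune this branch
            if len(candidate) >= 2:
                results.append(tuple(candidate))
            expand(candidate, i + 1)

    expand([], 0)
    return results

readings = {"s1": [1, 2, 3, 4], "s2": [2, 3, 4, 5], "s3": [5, 3, 2, 0]}
print(mine_scps(readings, threshold=0.9))  # [('s1', 's2')]
```
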
Proceedings ArticleDOI

Privacy risk in anonymized heterogeneous information networks

TL;DR: This work formally defines privacy risk in an anonymized heterogeneous information network, identifying a vulnerability in how such data may be released, and presents a new de-anonymization attack that exploits this vulnerability.
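
A toy illustration of the kind of structural vulnerability at stake, assuming a simple typed-edge graph and background knowledge about some users: nodes whose typed-degree signatures are unique can be re-identified by signature matching. This generic sketch is not the specific risk definition or attack developed in the paper.

```python
# Toy structural de-anonymization: match anonymized nodes to known nodes
# by typed-degree signature. Illustration only, not the paper's attack.
from collections import Counter

def signature(edges, node):
    """Typed-degree signature: how many edges of each type touch the node."""
    return Counter(t for u, v, t in edges if node in (u, v))

def deanonymize(anon_edges, anon_nodes, known_edges, known_nodes):
    """Map each anonymized node to the known node with the same signature,
    when that signature is unique (the vulnerable case)."""
    known_sigs = {n: signature(known_edges, n) for n in known_nodes}
    mapping = {}
    for a in anon_nodes:
        sig = signature(anon_edges, a)
        matches = [n for n, s in known_sigs.items() if s == sig]
        if len(matches) == 1:          # unique signature => re-identified
            mapping[a] = matches[0]
    return mapping

# Background knowledge: (user, item, edge_type) triples with real names.
known = [("alice", "p1", "wrote"), ("alice", "v1", "attended"),
         ("bob", "p2", "wrote")]
# "Anonymized" release that preserves the same typed structure.
anon = [("u7", "x1", "wrote"), ("u7", "y1", "attended"),
        ("u9", "x2", "wrote")]
print(deanonymize(anon, ["u7", "u9"], known, ["alice", "bob"]))
# {'u7': 'alice', 'u9': 'bob'}
```
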
Proceedings ArticleDOI

Distantly Supervised Biomedical Named Entity Recognition with Dictionary Expansion

TL;DR: Experimental results show that AutoBioNER achieves the best performance on BioNER benchmark datasets among methods that use only dictionaries and require no additional human effort, and demonstrate that the dictionary expansion step plays an important role in this strong performance.
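
A minimal sketch of distantly supervised labeling with a dictionary, under assumptions: a toy expansion rule that adds surface variants, greedy longest-match tagging, and a BIO scheme. This illustrates the general approach rather than the AutoBioNER pipeline itself.

```python
# Distantly supervised NER labeling: tag tokens by matching a dictionary
# of entity names, after a toy dictionary-expansion step. The dictionary,
# expansion rule, and BIO tagging are illustrative assumptions.
def expand_dictionary(dictionary):
    """Toy expansion: add simple surface variants of each entry."""
    expanded = set(dictionary)
    for term in dictionary:
        expanded.add(term.lower())
        expanded.add(term.replace("-", " "))
    return expanded

def distant_label(tokens, dictionary, max_len=4):
    """Greedy longest-match labeling: spans found in the dictionary
    become B-ENT/I-ENT, everything else O."""
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + n])
            if span in dictionary:
                labels[i] = "B-ENT"
                labels[i + 1:i + n] = ["I-ENT"] * (n - 1)
                i += n
                break
        else:
            i += 1
    return labels

dictionary = expand_dictionary({"BRCA1", "breast cancer"})
tokens = "Mutations in BRCA1 increase breast cancer risk".split()
print(list(zip(tokens, distant_label(tokens, dictionary))))
```
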
Journal ArticleDOI

MoveMine 2.0: mining object relationships from movement data

TL;DR: This work proposes to extend MoveMine to MoveMine 2.0 by adding substantial new methods for mining dynamic relationship patterns, focusing on two types of pairwise relationship patterns: attraction/avoidance relationships and following patterns.
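
A toy sketch of how an attraction/avoidance relationship might be scored: compare the observed meeting rate of two trajectories against a baseline obtained by randomly time-shifting one of them. The meeting radius, shift-based baseline, and significance thresholds are illustrative assumptions, not the MoveMine 2.0 method.

```python
# Compare how often two objects actually meet with how often they would
# meet if their trajectories were unrelated (random time shifts).
import random

def meeting_rate(traj_a, traj_b, radius=1.0):
    """Fraction of time steps where the two objects are within `radius`."""
    close = sum(abs(ax - bx) + abs(ay - by) <= radius
                for (ax, ay), (bx, by) in zip(traj_a, traj_b))
    return close / len(traj_a)

def relationship(traj_a, traj_b, trials=200, seed=0):
    """Attraction if observed meetings exceed the shuffled baseline,
    avoidance if they fall well below it."""
    rng = random.Random(seed)
    observed = meeting_rate(traj_a, traj_b)
    baseline = []
    for _ in range(trials):
        shift = rng.randrange(len(traj_b))
        shifted = traj_b[shift:] + traj_b[:shift]   # break temporal alignment
        baseline.append(meeting_rate(traj_a, shifted))
    expected = sum(baseline) / trials
    if observed > expected * 1.5:
        return "attraction"
    if observed < expected * 0.5:
        return "avoidance"
    return "no clear relationship"

a = [(t, 0) for t in range(50)]
b = [(t, 0.5) for t in range(50)]                   # always near a
print(relationship(a, b))                            # attraction
```
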
Posted Content

Understanding the Difficulty of Training Transformers

TL;DR: It is revealed that, for each layer in a multi-layer Transformer model, heavy dependency on its residual branch makes training unstable, since it amplifies small parameter perturbations and results in significant disturbances in the model output; a light dependency, on the other hand, limits the potential of model training and can lead to an inferior trained model.
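
A numerical toy illustrating the stated effect, under assumed settings (tanh residual branches, depth 48, Gaussian weight perturbations): a stack of unscaled residual blocks amplifies a small parameter perturbation far more than the same stack with each residual branch down-weighted. This is a generic illustration, not the paper's exact analysis or architecture.

```python
# In a deep stack of residual blocks y = x + f(x), a small parameter
# perturbation is amplified layer by layer; down-weighting each branch
# (y = x + beta * f(x)) damps that amplification. The beta values and
# block definition here are illustrative assumptions.
import numpy as np

def forward(x, weights, beta):
    """Stack of residual blocks with tanh branches."""
    for w in weights:
        x = x + beta * np.tanh(w @ x)
    return x

rng = np.random.default_rng(0)
dim, depth, eps = 16, 48, 1e-3
x = rng.normal(size=dim)
weights = [rng.normal(scale=1.0 / np.sqrt(dim), size=(dim, dim))
           for _ in range(depth)]
noise = [rng.normal(scale=eps, size=(dim, dim)) for _ in range(depth)]

for beta in (1.0, 0.2):
    base = forward(x, weights, beta)
    perturbed = forward(x, [w + n for w, n in zip(weights, noise)], beta)
    change = np.linalg.norm(perturbed - base) / np.linalg.norm(base)
    print(f"beta={beta}: relative output change {change:.2e}")
```
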