Chun Xia

Researcher at University of Illinois at Urbana–Champaign

Publications - 17
Citations - 278

Chun Xia is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in the topics of Computer science and Cache pollution. The author has an h-index of 4 and has co-authored 6 publications receiving 158 citations.

Papers
Proceedings Article

Optimizing instruction cache performance for operating system intensive workloads

TL;DR: This paper characterizes in detail the locality patterns of operating system code, shows that there is substantial locality, and proposes an algorithm to expose this locality and reduce cache interference.
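The paper's algorithm is not reproduced here; the following is only a minimal sketch of profile-guided code placement in the same spirit, assuming a hypothetical transition-count profile between OS routines. Routines that frequently execute one after another are packed contiguously so they are less likely to conflict in the instruction cache.

```python
# Sketch of greedy profile-guided layout (not the paper's algorithm).
# `profile` maps (caller, callee)-style routine pairs to hypothetical
# transition counts; hot pairs are made adjacent in the final layout.

def greedy_layout(profile):
    # Start with every routine in its own chain.
    chains = {r: [r] for pair in profile for r in pair}
    # Process the hottest transitions first.
    for (a, b), _count in sorted(profile.items(), key=lambda kv: -kv[1]):
        ca = next(c for c in chains.values() if a in c)
        cb = next(c for c in chains.values() if b in c)
        if ca is cb:
            continue  # a and b already share a chain
        # Merge only when a ends its chain and b starts its chain, so this
        # hot transition becomes an adjacency in the final layout.
        if ca[-1] == a and cb[0] == b:
            merged = ca + cb
            for r in merged:
                chains[r] = merged
    # Deduplicate shared chain objects while preserving order.
    layout, seen = [], set()
    for c in chains.values():
        if id(c) not in seen:
            seen.add(id(c))
            layout.extend(c)
    return layout

# Hypothetical transition counts between made-up OS routines.
profile = {("syscall_entry", "copy_from_user"): 90,
           ("copy_from_user", "do_read"): 70,
           ("do_read", "page_cache_lookup"): 40}
print(greedy_layout(profile))
```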
Proceedings Article

Instruction Prefetching of Systems Codes with Layout Optimized for Reduced Cache Misses

TL;DR: For 16-Kbyte primary instruction caches, guarded sequential prefetching removes, on average, 66% of the instruction misses remaining in an operating system with an optimized layout, speeding up the operating system by 10%.
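A rough illustration of the idea, not the paper's mechanism: next-line prefetches are issued only where a guard (standing in for compiler-inserted hints about likely sequential execution) allows it. The cache geometry, guard table, and fetch trace below are all made up for the sketch.

```python
# Toy direct-mapped instruction cache with guarded next-line prefetching.
# A prefetch of line i+1 is issued only when guard[i] is True.

LINE = 32          # bytes per cache line (hypothetical)
SETS = 512         # 16-KB cache / 32-byte lines

def simulate(trace, guard):
    cache = [None] * SETS          # tag stored per set
    misses = 0
    for addr in trace:
        line = addr // LINE
        idx, tag = line % SETS, line // SETS
        if cache[idx] != tag:
            misses += 1
            cache[idx] = tag
        # Guarded sequential prefetch: bring in the next line only if the
        # guard for the current line allows it.
        if guard.get(line, False):
            nline = line + 1
            cache[nline % SETS] = nline // SETS
    return misses

# Hypothetical straight-line fetch stream with guards set everywhere.
trace = list(range(0, 4096, 4))
guard = {line: True for line in range(0, 4096 // LINE + 1)}
print(simulate(trace, guard))   # guarded prefetching on
print(simulate(trace, {}))      # prefetching off, for comparison
```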
Journal Article

Optimizing the instruction cache performance of the operating system

TL;DR: This paper characterizes, in detail, the locality patterns of operating system code, shows that there is substantial locality, and proposes an algorithm to expose this locality and reduce interference in the cache.
Proceedings Article

Less training, more repairing please: revisiting automated program repair via zero-shot learning

Chun Xia, +1 more
TL;DR: This paper proposes AlphaRepair, the first cloze-style APR approach that directly leverages large pre-trained code models for APR without any fine-tuning/retraining on historical bug fixes, and implements AlphaRepair as a practical multilingual APR tool based on the recent CodeBERT model.
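As a rough illustration of the cloze-style idea (not AlphaRepair itself): replace the suspicious line with a mask token, ask a pre-trained masked language model for likely infillings, and keep any candidate that passes the test suite. `masked_lm_predict` and `run_tests` below are hypothetical placeholders for the model query and the project's test harness.

```python
# Sketch of cloze-style program repair, assuming two hypothetical helpers:
#   masked_lm_predict(code_with_mask) -> list of candidate infillings,
#     e.g. backed by a CodeBERT-style masked language model;
#   run_tests(patched_source) -> True when the project's test suite passes.
MASK = "<mask>"

def cloze_repair(source_lines, buggy_line_no, masked_lm_predict, run_tests):
    """Try to repair one suspicious line by masked infilling."""
    prefix = source_lines[:buggy_line_no]
    suffix = source_lines[buggy_line_no + 1:]
    # Build the cloze query: the original code with the buggy line masked out.
    query = "\n".join(prefix + [MASK] + suffix)
    for candidate in masked_lm_predict(query):
        patched = "\n".join(prefix + [candidate] + suffix)
        if run_tests(patched):        # validate against the test suite
            return patched            # first plausible patch found
    return None                       # no candidate validated
```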
Journal Article

Conversational Automated Program Repair

Chun Xia, +1 more
30 Jan 2023
TL;DR: This article proposes conversational APR, a new paradigm for program repair that alternates between patch generation and validation in a conversational manner, leveraging the long-term context window of LLMs both to avoid regenerating previously incorrect patches and to incorporate validation feedback that helps the model understand the semantic meaning of the program under test.
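A minimal sketch of such a generate-validate loop, not the paper's implementation: `chat_llm` and `run_tests` are hypothetical placeholders for the LLM chat interface and the project's test harness, and the prompt wording is invented for illustration.

```python
# Sketch of a conversational repair loop, assuming two hypothetical helpers:
#   chat_llm(messages) -> model reply (a candidate patch) given the full
#     conversation history;
#   run_tests(patch) -> (passed: bool, feedback: str) from the test suite.

def conversational_repair(buggy_code, failing_test, chat_llm, run_tests,
                          max_turns=5):
    messages = [{"role": "user",
                 "content": f"Fix this code so the test passes:\n{buggy_code}\n"
                            f"Failing test:\n{failing_test}"}]
    for _ in range(max_turns):
        patch = chat_llm(messages)           # candidate patch for this turn
        passed, feedback = run_tests(patch)  # validate immediately
        if passed:
            return patch
        # Keep both the incorrect patch and the test feedback in the
        # conversation so later turns avoid repeating the same mistake.
        messages.append({"role": "assistant", "content": patch})
        messages.append({"role": "user",
                         "content": f"That patch still fails:\n{feedback}\n"
                                    "Please try a different fix."})
    return None                              # no validated patch found
```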