Moinuddin K. Qureshi

Researcher at Georgia Institute of Technology

Publications - 144
Citations - 11625

Moinuddin K. Qureshi is an academic researcher at the Georgia Institute of Technology. The author has contributed to research in the topics of Cache and Computer science, has an h-index of 44, and has co-authored 131 publications receiving 9956 citations. Previous affiliations of Moinuddin K. Qureshi include IBM and the University of Texas at Austin.

Papers
Proceedings ArticleDOI

Embedded tutorial - Emerging memory technologies: What it means for computer system designers

TL;DR: This talk proposes Start-Gap, a simple and effective wear-leveling technique that incurs an overhead of only a few bytes yet provides a lifetime close to ideal wear leveling. It also discusses the performance impact of high write latency, since most emerging memory technologies have write latencies much higher than their read latencies.
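
The register-based mapping behind Start-Gap can be sketched in a few lines. The snippet below is a minimal illustration assuming a single Start and Gap register over N logical lines backed by N+1 physical lines; the class name, method names, and the gap-movement interval are illustrative, not the paper's exact parameters.

```python
# Minimal sketch of a Start-Gap style address map (illustrative names and
# constants; the hardware performs the line copies noted in the comments).

class StartGap:
    def __init__(self, num_lines, gap_move_interval=100):
        self.n = num_lines            # logical lines; physical array has n + 1 lines
        self.start = 0                # Start register: rotation of the address space
        self.gap = num_lines          # Gap register: index of the unused physical line
        self.interval = gap_move_interval
        self.writes = 0

    def translate(self, logical_addr):
        """Map a logical line address to a physical line address."""
        pa = (logical_addr + self.start) % self.n
        if pa >= self.gap:            # lines at or after the gap sit one slot higher
            pa += 1
        return pa

    def on_write(self):
        """Move the gap by one line every `interval` writes (one line copy each time)."""
        self.writes += 1
        if self.writes % self.interval != 0:
            return
        if self.gap > 0:
            # hardware copies physical line (gap - 1) into physical line (gap)
            self.gap -= 1
        else:
            # gap wraps around: physical line n is copied into physical line 0,
            # completing one full rotation of the address space
            self.gap = self.n
            self.start = (self.start + 1) % self.n
```

Because each gap movement costs a single line copy and the bookkeeping fits in two small registers, the overhead stays at a few bytes, which is the figure the abstract refers to.
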
Posted Content

Enabling Inference Privacy with Adaptive Noise Injection.

TL;DR: In this paper, the authors propose Adaptive Noise Injection (ANI), which uses a light-weight DNN on the client side to inject noise into each input before transmitting it to the service provider for inference.
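
As a rough illustration of the client-side flow summarized above, the sketch below uses a small PyTorch network to predict a per-input noise scale and adds that noise before anything leaves the client. The layer sizes, the Gaussian noise model, and all names are assumptions for illustration, not the ANI design itself.

```python
# Hypothetical client-side noise injection, sketched under the assumptions above.
import torch
import torch.nn as nn

class NoiseInjector(nn.Module):
    def __init__(self, in_dim=784, hidden=64):
        super().__init__()
        # light-weight client-side network that decides how much noise each input gets
        self.scale_net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim), nn.Softplus(),  # non-negative per-feature scale
        )

    def forward(self, x):
        sigma = self.scale_net(x)                 # per-input, per-feature noise scale
        noised = x + sigma * torch.randn_like(x)  # inject noise before transmission
        return noised

client = NoiseInjector()
x = torch.rand(1, 784)
x_private = client(x)   # this noised tensor, not x, would be sent to the service provider
```
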
Proceedings ArticleDOI

Astrea: Accurate Quantum Error-Decoding via Practical Minimum-Weight Perfect-Matching

TL;DR: Astrea-G, as discussed by the authors, is the first real-time MWPM decoder; it performs a brute-force search over the few hundred possible matching options to produce accurate decoding within nanosecond-scale latency (1 ns average, 456 ns worst-case).
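
The decoding step amounts to picking the lowest-weight pairing among a small number of candidates, which a brute-force search can enumerate directly. The toy sketch below does this in software over a handful of syndrome nodes; the distance-based weight function and the node representation are assumptions, and the paper's contribution is performing this search in hardware within nanoseconds, not this software loop.

```python
# Toy brute-force minimum-weight perfect matching over flipped syndrome bits.
def mwpm_bruteforce(nodes, weight):
    """Return (total weight, pairing) of the minimum-weight perfect matching of `nodes`."""
    if not nodes:
        return 0.0, []
    first, rest = nodes[0], nodes[1:]
    best_w, best_m = float("inf"), None
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        w, m = mwpm_bruteforce(remaining, weight)
        w += weight(first, partner)
        if w < best_w:
            best_w, best_m = w, [(first, partner)] + m
    return best_w, best_m

# toy example: syndrome nodes on a line, weight = distance between positions
nodes = [0, 2, 5, 6]
print(mwpm_bruteforce(nodes, weight=lambda a, b: abs(a - b)))  # -> (3.0, [(0, 2), (5, 6)])
```
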
Posted Content

To Update or Not To Update?: Bandwidth-Efficient Intelligent Replacement Policies for DRAM Caches.

TL;DR: In this article, RRIP Age-On-Bypass (RRIP-AOB) and Efficient Tracking of Reuse (ETR) are proposed to improve the hit-rate of DRAM caches.
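
One way to read "Age-On-Bypass" is that the RRIP state of a set is aged when an incoming line is bypassed rather than through a separate update, so the state change rides on a decision that is already being made. The sketch below encodes that reading; the RRPV width, insertion value, and bypass rule are assumptions for illustration, not the paper's exact policy.

```python
# Rough sketch of an RRIP-style cache set with an age-on-bypass rule (illustrative only).
RRPV_MAX = 3          # 2-bit re-reference prediction values
RRPV_INSERT = 2       # long re-reference interval predicted on insertion

class RRIPSet:
    def __init__(self, ways=8):
        self.tags = [None] * ways
        self.rrpv = [RRPV_MAX] * ways

    def access(self, tag):
        if tag in self.tags:                      # hit: predict near-term reuse
            self.rrpv[self.tags.index(tag)] = 0
            return "hit"
        # miss: install only if some way already predicts distant reuse
        if RRPV_MAX in self.rrpv:
            victim = self.rrpv.index(RRPV_MAX)
            self.tags[victim] = tag
            self.rrpv[victim] = RRPV_INSERT
            return "install"
        # no distant-reuse victim: bypass the fill and age the set instead of
        # spending bandwidth on an install plus a separate state update
        self.rrpv = [v + 1 for v in self.rrpv]
        return "bypass"
```
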
Posted Content

Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning.

TL;DR: Gradient Inversion Attack (GIA), as discussed by the authors, is a label-leakage attack that allows an adversarial input owner to learn the label owner's private labels by exploiting the gradient information obtained during split learning.
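
To see why returned gradients can leak labels at all, consider the simplest possible split, right before the softmax: the cross-entropy gradient with respect to the logits equals softmax(logits) minus the one-hot label, so its single negative entry reveals the private label. The toy example below shows only that special case; it is not the paper's attack, which targets more general split points.

```python
# Toy illustration of label leakage from logit gradients (not the paper's GIA method).
import numpy as np

def label_from_logit_gradient(grad_logits):
    # only the true-label position has a negative gradient component
    return int(np.argmin(grad_logits))

# label owner side (private): compute the gradient it sends back to the input owner
logits = np.array([1.2, -0.3, 0.8, 0.1])
true_label = 2
probs = np.exp(logits) / np.exp(logits).sum()
grad = probs.copy()
grad[true_label] -= 1.0          # d(cross-entropy)/d(logits) = softmax - one_hot

# adversarial input owner side: recover the label from the received gradient
print(label_from_logit_gradient(grad))   # -> 2
```
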