
Showing papers by "Jean-Luc Gaudiot published in 2019"


Proceedings ArticleDOI
15 Jul 2019
TL;DR: By monitoring deviations in microarchitectural events such as cache misses and branch mispredictions through existing CPU performance counters, hardware-level attacks such as Rowhammer and Spectre can be efficiently detected at runtime, with promising accuracy and reasonable performance overhead, using various machine learning classifiers.
Abstract: Over the past decades, the major objectives of computer design have been to improve performance and to reduce cost, energy consumption, and size, while security has remained a secondary concern. Meanwhile, malicious attacks have grown rapidly as the number of Internet-connected devices, ranging from personal smart embedded systems to large cloud servers, has been increasing. Traditional antivirus software cannot keep up with the increasing incidence of these attacks, especially exploits targeting hardware design vulnerabilities. For example, as DRAM process technology scales down, it becomes easier for DRAM cells to electrically interact with each other. In Rowhammer attacks, it is thus possible to corrupt data in nearby rows by repeatedly accessing the same row in DRAM. Because Rowhammer exploits a computer hardware weakness, no software patch can completely fix the problem. Similarly, there is no efficient software mitigation for the recently reported Spectre attack, which exploits microarchitectural design vulnerabilities to leak protected data through side channels. In general, completely fixing hardware-level vulnerabilities would require a redesign of the hardware, which cannot be backported. In this paper, we demonstrate that by monitoring deviations in microarchitectural events such as cache misses and branch mispredictions through existing CPU performance counters, hardware-level attacks such as Rowhammer and Spectre can be efficiently detected at runtime, with promising accuracy and reasonable performance overhead, using various machine learning classifiers.
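The detection idea described in the abstract can be sketched as follows: treat each sampling window of performance-counter readings as a feature vector and classify it against known benign and attack profiles. This is a minimal nearest-centroid sketch with synthetic numbers; the feature values, sampling scheme, and classifier choice are illustrative assumptions, not the paper's actual data or models.

```python
# Hedged sketch: flagging anomalous performance-counter windows with a
# minimal nearest-centroid classifier. Feature vectors are (LLC misses,
# branch mispredictions) per sampling window; all values are synthetic.

def centroid(samples):
    """Mean feature vector of a list of counter samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def classify(sample, benign_c, attack_c):
    """Label a counter sample by its nearer centroid (squared distance)."""
    d_benign = sum((a - b) ** 2 for a, b in zip(sample, benign_c))
    d_attack = sum((a - b) ** 2 for a, b in zip(sample, attack_c))
    return "attack" if d_attack < d_benign else "benign"

# Synthetic training windows: (LLC misses, branch mispredictions) per window.
# A Rowhammer-style loop hammering DRAM rows would show a large miss spike.
benign = [(120, 40), (150, 55), (110, 35)]
attack = [(9000, 60), (11000, 70), (9500, 65)]

benign_c = centroid(benign)
attack_c = centroid(attack)

print(classify((130, 45), benign_c, attack_c))    # prints "benign"
print(classify((10200, 62), benign_c, attack_c))  # prints "attack"
```

In the paper's setting the features would come from real hardware counters (e.g. via `perf`) and the classifier would be one of several trained machine learning models; the point here is only the shape of the pipeline: sample counters per window, featurize, classify.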

30 citations


Proceedings ArticleDOI
01 Aug 2019
TL;DR: A cache memory design called the Data Shepherding Cache is proposed for larger last-level caches; it achieves reasonable performance with a smaller area footprint than a same-sized set-associative cache.
Abstract: Newer chips include cache memories as large as 128 MB to sustain the bandwidth demands of the GPGPU module. As 128 MB was a reasonable main memory size a decade ago, we examine the design impact of a larger granularity in the management of caches. We thus propose a cache memory design called the Data Shepherding Cache for larger last-level caches. Even with a granularity as large as a page for the management of the last-level cache, our Data Shepherding Cache achieves reasonable performance with a smaller area footprint than a same-sized set-associative cache.
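The key idea in the abstract, managing the last-level cache at page granularity so that one tag covers a whole page instead of one per line, can be illustrated with a toy model. This is a hedged sketch under assumed parameters (4 KB pages, LRU replacement); it is not the Data Shepherding Cache's actual mechanism, only a demonstration of why coarser granularity shrinks tag state.

```python
from collections import OrderedDict

# Illustrative page-granularity cache model: tags are tracked per 4 KB page
# rather than per 64 B line, so the tag array is ~64x smaller for the same
# capacity. Replacement policy (LRU) and sizes are assumptions for the sketch.

PAGE_SIZE = 4096  # bytes; one tag entry covers a whole page

class PageGranularityCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # page number -> present, in LRU order

    def access(self, addr):
        """Return True on a hit; on a miss, allocate the page, evicting LRU."""
        page = addr // PAGE_SIZE
        if page in self.pages:
            self.pages.move_to_end(page)    # refresh LRU position
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least recently used page
        self.pages[page] = True
        return False

cache = PageGranularityCache(capacity_pages=2)
print(cache.access(0x0000))  # prints False: first touch of page 0 misses
print(cache.access(0x0800))  # prints True: same 4 KB page as 0x0000
print(cache.access(0x2000))  # prints False: page 2 allocated on miss
```

The trade-off the paper evaluates follows directly from this model: tracking whole pages cuts tag-array area, but a sparsely used page occupies far more data capacity than the lines actually touched.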

2 citations