
Showing papers by "Per Stenström published in 2023"


Journal ArticleDOI
TL;DR: Approx-RM, as discussed by the authors, is a resource management scheme that reduces energy expenditure while guaranteeing specified performance and accuracy targets; a slight but bounded relaxation in solution quality reduces the required iteration count and thereby saves significant amounts of energy.
Abstract: Reducing energy consumption while providing performance and quality guarantees is crucial for computing systems ranging from battery-powered embedded systems to data centers. This paper considers approximate iterative applications executing on heterogeneous multi-core platforms under user-specified performance and quality targets. We note that allowing a slight yet bounded relaxation in solution quality can considerably reduce the required iteration count and thereby save significant amounts of energy. To this end, this paper proposes Approx-RM, a resource management scheme that reduces energy expenditure while guaranteeing a specified performance as well as accuracy target. Approx-RM predicts, at run-time, the number of iterations required to meet the relaxed accuracy target. The time saved generates execution-time slack, which allows Approx-RM to allocate fewer resources on a heterogeneous multi-core platform in terms of DVFS, core type, and core count to save energy while meeting the performance target. Approx-RM contributes lightweight methods for predicting the iteration count needed to meet the accuracy target and the resources needed to meet the performance target. Approx-RM uses these predictions to allocate just enough resources to comply with quality-of-service constraints and save energy. Our evaluation shows energy savings of 31.6%, on average, compared to Race-to-idle when the accuracy is relaxed by only 1%. Approx-RM incurs timing and energy overheads of less than 0.1%.
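The two predictions the abstract describes — how many iterations a relaxed accuracy target needs, and the cheapest resource configuration that still meets the deadline — can be illustrated with a minimal sketch. This is not Approx-RM's actual predictor; it assumes, purely for illustration, that solver error decays geometrically and that DVFS levels scale execution time linearly:

```python
import math

def predict_iterations(initial_err, decay_rate, target_err):
    """Smallest k with initial_err * decay_rate**k <= target_err,
    assuming (hypothetically) geometric error decay per iteration."""
    return math.ceil(math.log(target_err / initial_err) / math.log(decay_rate))

def pick_dvfs_level(iters, time_per_iter_at_max, deadline, levels):
    """levels: (relative_speed, relative_power) pairs, slowest first.
    Return the lowest-power level whose projected runtime meets the
    deadline; fall back to the fastest level otherwise."""
    for speed, power in levels:
        if iters * time_per_iter_at_max / speed <= deadline:
            return (speed, power)
    return levels[-1]

# Relaxing the target error from 0.001 to 0.01 cuts the iteration
# count, creating slack that lets a slower, cheaper level suffice.
iters = predict_iterations(1.0, 0.5, 0.01)          # 7 iterations
level = pick_dvfs_level(iters, 1.0, 10.0,
                        [(0.5, 0.3), (0.75, 0.6), (1.0, 1.0)])
```

Here the relaxed target needs 7 iterations, so the mid DVFS level (0.75 speed, 0.6 power) meets the 10-unit deadline instead of the fastest, most power-hungry one — the essence of converting quality slack into energy savings.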

DOI
01 May 2023
TL;DR: SCALE, as mentioned in this paper, is a secure cache allocation policy and enforcement mechanism that protects the cache against timing-based side-channel attacks; non-determinism is introduced into allocation policy decisions by adding noise, preventing the adversary from observing predictable changes in allocation and thereby inferring secrets.
Abstract: Dynamically partitioned last-level caches enhance performance while also introducing security vulnerabilities. We show how cache allocation policies can act as a side channel and be exploited to launch attacks and obtain sensitive information. Our analysis reveals that information leaks due to predictable changes in cache allocation for the victim, caused and/or observed by the adversary, lead to exploits. We propose SCALE, a secure cache allocation policy and enforcement mechanism, to protect the cache against timing-based side-channel attacks. SCALE uses randomness, in a novel way, to enable dynamic and scalable partitioning while protecting against cache allocation policy side-channel attacks. Non-determinism is introduced into the allocation policy decisions by adding noise, which prevents the adversary from observing predictable changes in allocation and thereby inferring secrets. We leverage differential privacy (DP) and show that SCALE can provide quantifiable, information-theoretic security guarantees. SCALE outperforms state-of-the-art secure cache solutions on a 16-core tiled chip multi-processor (CMP) with multi-programmed workloads, improving performance by up to 39% and by 14% on average.
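The core idea — perturbing allocation decisions with calibrated noise so partition changes no longer track a victim's true cache demand — can be sketched with the standard Laplace mechanism from differential privacy. This is a toy illustration under assumed parameters, not SCALE's actual enforcement mechanism:

```python
import random

def noisy_way_allocation(demands, total_ways, epsilon, sensitivity=1.0):
    """Share cache ways among cores in proportion to their measured
    demands, after adding Laplace noise (scale = sensitivity/epsilon)
    to each demand. The noise makes the observable partition a
    randomized function of the victim's true demand, in the spirit of
    the DP guarantee the paper describes."""
    def laplace(scale):
        # sign * Exponential(1/scale) is Laplace(0, scale)-distributed
        return random.choice((-1, 1)) * random.expovariate(1.0 / scale)

    scale = sensitivity / epsilon
    noisy = [max(d + laplace(scale), 0.0) for d in demands]
    total = sum(noisy) or 1.0
    # Every core keeps at least one way so no partition is starved.
    return [max(1, round(total_ways * n / total)) for n in noisy]
```

A smaller epsilon means larger noise and stronger privacy but allocations that track real demand less closely — the same privacy/utility trade-off that lets the paper quantify its security guarantees.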