Madhu Mutyam
Researcher at Indian Institute of Technology Madras
Publications - 68
Citations - 646
Madhu Mutyam is an academic researcher from the Indian Institute of Technology Madras. The author has contributed to research on topics including cache design and cache pollution, has an h-index of 12, and has co-authored 67 publications receiving 611 citations. Previous affiliations of Madhu Mutyam include the International Institute of Information Technology, Hyderabad.
Papers
Journal ArticleDOI
Optimization of Intercache Traffic Entanglement in Tagless Caches With Tiling Opportunities
S. R. Swamy Saranam Chongala, Sumitha George, Hariram Thirucherai Govindarajan, Jagadish B. Kotra, Madhu Mutyam, John Sampson, Mahmut Kandemir, Vijaykrishnan Narayanan +7 more
TL;DR: New replacement policies and energy-friendly mechanisms for tagless LLCs, such as restricted block caching and victim tag buffer caching, are proposed to efficiently incorporate L4 eviction costs into L3 replacement decisions and to address entanglement overheads and pathologies.
Formal Modeling and Verification of Security Properties of a Ransomware-Resistant SSD
TL;DR: Wang et al. derive a set of security properties related to protection from ransomware that covers both the security requirements and the design, and then prove these properties using symbolic model checking.
Book ChapterDOI
Universality Results for Some Variants of P Systems
Madhu Mutyam, Kamala Krithivasan +1 more
TL;DR: In this article, the authors consider several classes of P systems with symbol-objects where the catalysts can move in and out of a membrane, and prove universality results for these variants with a very small number of membranes.
Proceedings ArticleDOI
SAMO: store aware memory optimizations
TL;DR: This paper tracks stores both at the cache and at main memory and applies three optimizations: at the cache level, stores are serviced faster so that load-store queue blocking cycles are reduced; in the miss handling architecture, entries containing only store requests are removed, reducing cache stall cycles; and at main memory, stores are serviced at lower priority so that reads are serviced faster.
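The third optimization, servicing stores at lower priority than reads, can be illustrated with a toy scheduler sketch. This is a hypothetical model for intuition only, not the paper's actual memory-controller implementation; the class and method names are invented for illustration:

```python
from collections import deque

# Toy model of read-over-write prioritization at a memory controller:
# pending reads are always scheduled ahead of pending writes, so demand
# loads see lower latency while stores drain in the background.
class ToyMemoryScheduler:
    def __init__(self):
        self.reads = deque()   # pending load requests
        self.writes = deque()  # pending store requests

    def enqueue(self, kind, addr):
        (self.reads if kind == "read" else self.writes).append(addr)

    def next_request(self):
        # Reads always win over writes when both are pending.
        if self.reads:
            return ("read", self.reads.popleft())
        if self.writes:
            return ("write", self.writes.popleft())
        return None

sched = ToyMemoryScheduler()
for kind, addr in [("write", 0x10), ("read", 0x20), ("write", 0x30), ("read", 0x40)]:
    sched.enqueue(kind, addr)

order = []
while (req := sched.next_request()) is not None:
    order.append(req)
# Both reads are serviced before either write, despite arriving later.
```

A real controller would also bound write-queue occupancy and force write drains to avoid starvation; the sketch omits that for brevity.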
Proceedings ArticleDOI
An Experimental Study on Dynamic Bank Partitioning of DRAM in Chip Multiprocessors
TL;DR: A comparative study of static and dynamic DRAM bank allocation algorithms on a large number of SPEC CPU 2006 benchmarks concludes that, although per-application performance improves for a few applications, there is no benefit to overall system performance.
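The static side of the comparison can be sketched with a toy bank-partitioning model. This is a hypothetical illustration (the function and parameters are invented, not the paper's algorithm): each application is assigned a disjoint subset of DRAM banks, so co-running applications never contend for each other's row buffers:

```python
NUM_BANKS = 8

def static_bank(app_id, page_addr, banks_per_app=4):
    # Static partitioning: app 0 owns banks 0..3, app 1 owns banks 4..7.
    # A page is placed only within its owner's bank subset.
    base = (app_id * banks_per_app) % NUM_BANKS
    return base + (page_addr % banks_per_app)

# Map 64 pages for each of two co-running applications.
banks_app0 = {static_bank(0, p) for p in range(64)}
banks_app1 = {static_bank(1, p) for p in range(64)}
# The two applications' bank sets are disjoint, eliminating inter-app
# row-buffer interference at the cost of halving each app's bank-level
# parallelism.
```

The trade-off in the comment is exactly what the study measures: isolation helps a few interference-sensitive applications, but the lost bank-level parallelism cancels the gain at the system level.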