scispace - formally typeset

Mikko H. Lipasti

Researcher at University of Wisconsin-Madison

Publications: 162
Citations: 6213

Mikko H. Lipasti is an academic researcher at the University of Wisconsin-Madison. He has contributed to research on topics including caches and cache coherence. He has an h-index of 41, having co-authored 156 publications that have received 6041 citations. Previous affiliations of Mikko H. Lipasti include Carnegie Mellon University and the Wisconsin Alumni Research Foundation.

Papers
Proceedings ArticleDOI

Value locality and load value prediction

TL;DR: This paper introduces the notion of value locality, a third facet of locality that is frequently present in real-world programs, and describes how to effectively capture and exploit it in order to perform load value prediction.
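The core mechanism behind load value prediction can be sketched with a small last-value prediction table indexed by load PC. This is a minimal illustrative sketch, not the paper's exact design: the table size, the modulo indexing, and the 2-bit saturating confidence counter are assumptions added here.

```python
class LoadValuePredictor:
    """Toy last-value predictor: remembers the last value each static
    load produced and predicts it will recur (value locality)."""

    def __init__(self, entries=1024):
        self.entries = entries
        self.table = {}  # table index -> (last value, confidence counter)

    def _index(self, pc):
        # Simplifying assumption: direct-mapped, PC modulo table size.
        return pc % self.entries

    def predict(self, pc):
        """Return (predicted value, confident?) for a load at this PC."""
        value, conf = self.table.get(self._index(pc), (None, 0))
        return value, conf >= 2

    def update(self, pc, actual):
        """Train with the actual value once the load completes."""
        idx = self._index(pc)
        value, conf = self.table.get(idx, (None, 0))
        if value == actual:
            conf = min(conf + 1, 3)  # correct: raise saturating confidence
        else:
            conf = max(conf - 1, 0)  # wrong: lower confidence
        self.table[idx] = (actual, conf)
```

A load that keeps returning the same value builds confidence; only confident predictions would be forwarded speculatively to dependent instructions, with the real load verifying the guess.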
Proceedings ArticleDOI

Exceeding the dataflow limit via value prediction

TL;DR: It is shown that simple microarchitectural enhancements enabling value prediction in a modern microprocessor implementation based on the PowerPC 620 can effectively exploit value locality to collapse true dependences, reduce average result latency, and provide performance gains of 4.5%-23% by exceeding the dataflow limit.
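The dependence-collapsing idea can be illustrated with a toy timing model. This is purely illustrative: the latencies and the assumption of a correct prediction are mine, not the PowerPC 620-based design evaluated in the paper.

```python
def chain_latency(n_deps, load_latency=3, alu_latency=1, predicted=False):
    """Cycles to finish a load followed by n_deps serially dependent ALU ops.

    Without prediction, each consumer waits for its producer, so the
    serial chain sets the lower bound (the dataflow limit). With a
    correct value prediction, consumers use the predicted load value
    immediately, so the load's latency overlaps the chain instead of
    preceding it; the load only verifies the prediction.
    """
    if predicted:
        return max(load_latency, n_deps * alu_latency)
    return load_latency + n_deps * alu_latency
```

With a 3-cycle load and four dependent 1-cycle ops, the unpredicted chain takes 7 cycles while the predicted one takes 4, i.e. the true dependence on the load has been collapsed.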
Book

Modern Processor Design: Fundamentals of Superscalar Processors

TL;DR: This book brings together the numerous microarchitectural techniques for harvesting more instruction-level parallelism (ILP) to achieve better processor performance that have been proposed and implemented in real machines.
Journal ArticleDOI

Virtual Circuit Tree Multicasting: A Case for On-Chip Hardware Multicast Support

TL;DR: The proposed Virtual Circuit Tree Multicasting (VCTM) router is flexible enough to improve interconnect performance for a broad spectrum of multicasting scenarios, and achieves these benefits with straightforward and inexpensive extensions to a state-of-the-art packet-switched router.
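Why a multicast tree beats replicated unicasts on a mesh network-on-chip can be shown by counting link traversals. A hedged sketch: the mesh topology, dimension-ordered XY routing, and treating the tree as the union of unicast paths are simplifying assumptions for illustration, not the VCTM router's actual tree-construction mechanism.

```python
def xy_route(src, dst):
    """Links crossed by dimension-ordered (X-then-Y) routing on a mesh."""
    (sx, sy), (dx, dy) = src, dst
    links = []
    x = sx
    while x != dx:  # travel along X first
        nxt = x + (1 if dx > x else -1)
        links.append(((x, sy), (nxt, sy)))
        x = nxt
    y = sy
    while y != dy:  # then along Y
        nxt = y + (1 if dy > y else -1)
        links.append(((x, y), (x, nxt)))
        y = nxt
    return links

def unicast_cost(src, dests):
    """Replicated unicasts: every destination gets its own packet."""
    return sum(len(xy_route(src, d)) for d in dests)

def tree_cost(src, dests):
    """Tree multicast: shared path prefixes are traversed only once,
    and packets are replicated at branch points."""
    shared = set()
    for d in dests:
        shared.update(xy_route(src, d))
    return len(shared)
```

For a source at (0,0) multicasting to (3,0), (3,1), and (3,2), replicated unicasts cross 12 links while the shared tree crosses only 5, which is the bandwidth saving hardware multicast support aims to capture.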
Proceedings ArticleDOI

Achieving predictable performance through better memory controller placement in many-core CMPs

TL;DR: This paper shows how the location of the memory controllers can reduce contention (hot spots) in the on-chip fabric and lower the variance in reference latency, which provides predictable performance for memory-intensive applications regardless of the processing core on which a thread is scheduled.