scispace - formally typeset
Patent

Power efficient stack of multicore microprocessors

TLDR
In this article, a stack of microprocessor chips designed to work together in a multiprocessor system is described, with a hypervisor or operating system controlling the utilization of the individual chips in the stack.
Abstract
A computing system has a stack of microprocessor chips that are designed to work together in a multiprocessor system. The chips are interconnected with 3D through-vias, or alternatively by compatible package carriers having the interconnections, while logically the chips in the stack are interconnected via specialized cache-coherent interconnections. All of the chips in the stack use the same logical chip design, even though they can be easily personalized by setting specialized latches on the chips. One or more of the individual microprocessor chips utilized in the stack are implemented in a silicon process that is optimized for high performance, while others are implemented in a silicon process that is optimized for power consumption, i.e. for the best performance per watt of electrical power consumed. The hypervisor or operating system controls the utilization of the individual chips of a stack.
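The abstract's key idea is that the hypervisor or OS decides which chips in the heterogeneous stack to utilize at any moment. The sketch below is a minimal illustration of one plausible policy, assuming hypothetical chip parameters and a greedy performance-per-watt ordering; none of the names, numbers, or the specific policy come from the patent itself.

```python
# Illustrative sketch only: the patent says the hypervisor/OS controls which
# chips of the stack are utilized; the chip parameters and the greedy
# perf-per-watt policy below are hypothetical assumptions, not the patent's method.
from dataclasses import dataclass


@dataclass
class StackedChip:
    name: str
    performance: float  # relative throughput units when active
    watts: float        # power drawn when active


def select_chips(stack, demand):
    """Activate the most power-efficient chips first (highest perf/watt)
    until the aggregate performance meets the workload demand."""
    active, total = [], 0.0
    for chip in sorted(stack, key=lambda c: c.performance / c.watts, reverse=True):
        if total >= demand:
            break
        active.append(chip)
        total += chip.performance
    return active


stack = [
    StackedChip("fast-process-die", performance=10.0, watts=20.0),  # high-performance silicon
    StackedChip("low-power-die-0", performance=4.0, watts=4.0),     # power-optimized silicon
    StackedChip("low-power-die-1", performance=4.0, watts=4.0),
]

# Light load: only the power-optimized dies are activated.
light = select_chips(stack, demand=6.0)
# Heavy load: the high-performance die is brought online as well.
heavy = select_chips(stack, demand=15.0)
```

Under this (assumed) policy, a light workload runs entirely on the power-optimized dies, and the high-performance die is powered up only when demand exceeds what they can supply together.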


Citations
Patent

Exploiting process variation in a multicore processor

TL;DR: In this article, a method is described for accessing characterization data indicating first and second sets of performance characteristics of a processor, although the characterization data is not available for the second processor.
Patent

Memory Processing Core Architecture

TL;DR: In this paper, a memory system is described comprising a plurality of stacked memory layers, each divided into memory sections, wherein each memory section connects to a neighboring memory section in an adjacent memory layer, together with a logic layer stacked among the plurality of memory layers.
Patent

Systems, methods and devices for determining work placement on processor cores

TL;DR: In this article, an apparatus may include one or more processors, devices, and/or circuitry to determine whether to migrate a thread to or from a favored core of a processor.
Patent

Three-Dimensional Permute Unit for a Single-Instruction Multiple-Data Processor

TL;DR: In this article, a 3D permute unit for a single-instruction, multiple-data stacked processor is described that includes a first vector permute subunit and a second vector permute subunit, each configured to process a portion of at least two input vectors.
Patent

Cold storage server

Zhang Bin, +1 more
TL;DR: In this article, a cold storage server based on the APM Multi-Core X-Gene series is described, consisting of a first computing module and a first group of six I/O (input/output) storage modules.
References
Proceedings ArticleDOI

PicoServer: using 3D stacking technology to enable a compact energy efficient chip multiprocessor

TL;DR: It is shown how 3D stacking technology can be used to implement a simple, low-power, high-performance chip multiprocessor suitable for throughput processing, and that a PicoServer performs comparably to a Pentium 4-class machine while consuming only about 1/10 of the power.
Patent

Wafer level stackable semiconductor package

TL;DR: A stackable semiconductor package includes a semiconductor die and has a chip-sized peripheral outline matching that of the die; its peripheral contacts can also function as edge contacts for the package.
Patent

Memory module having interconnected and stacked integrated circuits

TL;DR: A multi-chip memory module may be formed including two or more stacked integrated circuits mounted to a substrate or lead-frame structure, which can couple one or more of the integrated circuits to edge conductors in a memory-card package configuration.
Patent

Stackable memory card

TL;DR: In this article, a design for stackable memory cards is presented: a first memory card is connected to the sockets of the computer system's motherboard, and subsequent memory cards are stacked on top of this first card; a presence-detect serial EPROM and steer-and-encode logic are provided to assign a unique system address to each presence-detect EPROM.
Patent

Horizontally-shared cache victims in multiple core processors

TL;DR: Cache priority rules can be based on cache coherency data, load-balancing schemes, and architectural characteristics of the processor as discussed by the authors; the processor evaluates these rules to determine whether victim lines are discarded, written back to system memory, or stored in other processor core units' caches.