Topic

Overclocking

About: Overclocking is a research topic. Over the lifetime, 216 publications have been published within this topic receiving 3548 citations. The topic is also known as: overclock & OC.


Papers
Proceedings ArticleDOI
03 Dec 2003
TL;DR: A solution that lets the circuit operate even below the ‘critical’ voltage, so that no safety margins are required and more energy can be saved.
Abstract: With increasing clock frequencies and silicon integration, power aware computing has become a critical concern in the design of embedded processors and systems-on-chip. One of the more effective and widely used methods for power-aware computing is dynamic voltage scaling (DVS). In order to obtain the maximum power savings from DVS, it is essential to scale the supply voltage as low as possible while ensuring correct operation of the processor. The critical voltage is chosen such that under a worst-case scenario of process and environmental variations, the processor always operates correctly. However, this approach leads to a very conservative supply voltage since such a worst-case combination of different variabilities is very rare. In this paper, we propose a new approach to DVS, called Razor, based on dynamic detection and correction of circuit timing errors. The key idea of Razor is to tune the supply voltage by monitoring the error rate during circuit operation, thereby eliminating the need for voltage margins and exploiting the data dependence of circuit delay. A Razor flip-flop is introduced that double-samples pipeline stage values, once with a fast clock and again with a time-borrowing delayed clock. A metastability-tolerant comparator then validates latch values sampled with the fast clock. In the event of timing error, a modified pipeline mispeculation recovery mechanism restores correct program state. A prototype Razor pipeline was designed in a 0.18 μm technology and was analyzed. Razor energy overhead during normal operation is limited to 3.1%. Analyses of a full-custom multiplier and a SPICE-level Kogge-Stone adder model reveal that substantial energy savings are possible for these devices (up to 64.2%) with little impact on performance due to error recovery (less than 3%).
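
The feedback loop Razor enables can be illustrated with a short sketch: lower the supply voltage step by step while the observed timing-error rate stays below a target, and stop once the next step would push it past that target. The Python sketch below is a minimal illustration only; the controller structure, thresholds, and toy error model are assumptions, not the paper's implementation.

```python
# Minimal sketch of a Razor-style voltage controller (illustrative only;
# names, thresholds, and the error model are assumptions, not from the paper).
import math

def razor_voltage_controller(measure_error_rate, v_start=1.8, v_min=0.6,
                             step=0.025, target_error_rate=1e-4):
    """Tune the supply voltage down until the observed timing-error rate
    approaches the target, then stop.

    measure_error_rate(v): fraction of pipeline operations that trigger a
    Razor error/recovery at supply voltage v.
    """
    v = v_start
    while v - step >= v_min:
        if measure_error_rate(v - step) <= target_error_rate:
            v -= step    # margin still available: keep scaling down
        else:
            break        # next step would exceed the acceptable error rate
    return v


if __name__ == "__main__":
    # Toy error model: errors grow sharply once v drops below a "critical" voltage.
    critical_v = 1.1
    def toy_error_rate(v):
        return 1.0 / (1.0 + math.exp(60 * (v - critical_v)))

    print(f"settled at {razor_voltage_controller(toy_error_rate):.3f} V")
```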

1,137 citations

Proceedings ArticleDOI
10 Apr 2011
TL;DR: The first large-scale analysis of hardware failure rates on a million consumer PCs finds that CPU fault rates are correlated with the number of cycles executed, that underclocked machines are significantly more reliable than machines running at their rated speed, and that laptops are more reliable than desktops.
Abstract: We present the first large-scale analysis of hardware failure rates on a million consumer PCs. We find that many failures are neither transient nor independent. Instead, a large portion of hardware induced failures are recurrent: a machine that crashes from a fault in hardware is up to two orders of magnitude more likely to crash a second time. For example, machines with at least 30 days of accumulated CPU time over an 8 month period had a 1 in 190 chance of crashing due to a CPU subsystem fault. Further, machines that crashed once had a probability of 1 in 3.3 of crashing a second time. Our study examines failures due to faults within the CPU, DRAM and disk subsystems. Our analysis spans desktops and laptops, CPU vendor, overclocking, underclocking, generic vs. brand name, and characteristics such as machine speed and calendar age. Among our many results, we find that CPU fault rates are correlated with the number of cycles executed, underclocked machines are significantly more reliable than machines running at their rated speed, and laptops are more reliable than desktops.
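
A quick arithmetic check of the recurrence figures quoted in the abstract (the numbers are taken directly from it; the comparison below is only illustrative):

```python
# Figures from the abstract: first-crash probability vs. repeat-crash probability.
p_first  = 1 / 190   # chance of a first CPU-subsystem crash (>= 30 days CPU time over 8 months)
p_repeat = 1 / 3.3   # chance of a second crash, given one crash already

increase = p_repeat / p_first
print(f"A machine that has crashed once is roughly {increase:.0f}x more likely "
      "to crash again, in line with the abstract's 'up to two orders of magnitude'.")
```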

146 citations

Proceedings ArticleDOI
20 Apr 2009
TL;DR: An extension of the well-known Controller Area Network (CAN), called CAN+, is proposed, with which the nominal rate of 1 Mbit/s can be increased up to 16 times by exploiting the fact that data can be sent in time slots during which CAN-conformant nodes do not listen.
Abstract: As the number of electronic components in automobiles steadily increases, the demand for higher communication bandwidth also rises dramatically. Instead of installing new wiring harnesses and new bus structures, it would be useful if already available structures could be used, but driven at higher data rates. In this paper, we a) propose an extension of the well-known Controller Area Network (CAN) called CAN+ with which the target rate of 1 Mbit/s can be increased up to 16 times. Moreover, b) existing CAN hardware and devices not designed for these boosted data rates can still be used without interfering with communication. The major idea is a change of the protocol. In particular, we exploit the fact that data can be sent in time slots during which CAN-conformant nodes do not listen. Finally, c) an implementation of this type of overclocking scheme on an FPGA is provided to prove the feasibility and the impressive throughput gains.
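
The core trick can be sketched numerically: a legacy CAN controller only evaluates the bus around its sample point, so the remainder of each nominal bit time can carry additional, faster bits. The Python sketch below is illustrative only; the timing figures (sample window, fast bit time) are assumptions, not values from the paper.

```python
# Illustrative sketch of the CAN+ idea: inside one nominal CAN bit time,
# standard nodes only care about the bus level around the sample point,
# so extra high-rate bits can be squeezed into the part of the bit they ignore.
# All timing figures below are assumptions for illustration.

def canplus_extra_bits(bit_time_ns=1000,       # nominal 1 Mbit/s CAN bit time
                       sample_window_ns=250,   # region a standard node actually evaluates
                       fast_bit_ns=62.5):      # bit time of the overclocked payload
    """Return how many fast bits fit into the part of the nominal bit
    that legacy CAN controllers do not evaluate."""
    unused_ns = bit_time_ns - sample_window_ns
    return int(unused_ns // fast_bit_ns)

if __name__ == "__main__":
    extra = canplus_extra_bits()
    print(f"~{extra} extra fast bits per nominal CAN bit under these assumed timings")
```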

101 citations

Patent
David I. Poisner
29 Sep 1999
TL;DR: In this paper, an over-clock deterrent mechanism of a chipset is presented, comprising an over-clock detection circuit that detects over-clocking of the system (processor) clock signal based on the ratio of that signal, which may be over-clocked, to a fixed, stable reference clock signal, which is highly unlikely to be over-clocked.
Abstract: An over-clock deterrent mechanism of a chipset which comprises an over-clock detection circuit for detecting over-clocking of a system (processor) clock signal based on a comparison of the ratio between the system (processor) clock signal, which is likely to be over-clocked, and a fixed, stable reference clock signal, which is highly unlikely to be over-clocked; and an over-clock prevention (thwarting) circuit for deterring such over-clocking by either disabling operations of a computer system or significantly undermining key operations of a computer system.
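
The detection idea reduces to comparing an observed clock-frequency ratio against the rated one. Below is a small Python sketch of that comparison; the function name, tolerance, and example figures are assumptions for illustration, not values from the patent.

```python
# Sketch of the ratio-comparison idea: count system-clock edges within a window
# defined by a trusted reference clock, and flag over-clocking if the observed
# ratio exceeds the rated ratio by more than a tolerance (tolerance assumed).

def detect_overclock(sys_clock_edges, ref_clock_edges, rated_ratio, tolerance=0.02):
    """Return True if the system clock appears over-clocked.

    sys_clock_edges / ref_clock_edges: edge counts over the same measurement
    window; rated_ratio: expected system/reference frequency ratio.
    """
    observed_ratio = sys_clock_edges / ref_clock_edges
    return observed_ratio > rated_ratio * (1 + tolerance)

if __name__ == "__main__":
    # Example: 100 MHz rated system clock against a 14.318 MHz reference,
    # measured over a 1 ms window; the observed clock runs near 112 MHz.
    rated = 100e6 / 14.318e6
    print(detect_overclock(sys_clock_edges=112_000,
                           ref_clock_edges=14_318,
                           rated_ratio=rated))   # -> True
```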

101 citations

Posted Content
TL;DR: In this paper, the authors present a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network.
Abstract: Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our survey over cards on Folding@home finds that, in their installed environments, two-thirds of tested GPUs exhibit a detectable, pattern-sensitive rate of memory soft errors. We demonstrate that these errors persist after controlling for overclocking and environmental proxies for temperature, but depend strongly on board architecture.
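
The write/read-back/compare core of such a memory test can be sketched as follows. The real MemtestG80 runs CUDA kernels against GPU device memory with its own set of test patterns; this CPU-side Python model (pattern, buffer size, and iteration count assumed) only illustrates the idea.

```python
# Minimal model of a pattern-sensitive memory test of the kind MemtestG80
# performs on GPU memory: write a pattern, read it back, count mismatches.
import numpy as np

def pattern_test(buffer_words=1 << 20, pattern=0xAAAAAAAA, iterations=4):
    """Alternately write a pattern and its complement, read back, and count
    words that read back incorrectly (soft errors would appear as nonzero counts)."""
    errors = 0
    mem = np.empty(buffer_words, dtype=np.uint32)
    for i in range(iterations):
        value = np.uint32(pattern if i % 2 == 0 else ~pattern & 0xFFFFFFFF)
        mem[:] = value                     # write phase
        mismatches = mem != value          # read-back/compare phase
        errors += int(np.count_nonzero(mismatches))
    return errors

if __name__ == "__main__":
    # On healthy hardware this should print 0.
    print(f"mismatched words detected: {pattern_test()}")
```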

98 citations


Network Information
Related Topics (5)
Cache: 59.1K papers, 976.6K citations (69% related)
Scalability: 50.9K papers, 931.6K citations (69% related)
Key (cryptography): 60.1K papers, 659.3K citations (68% related)
Encryption: 98.3K papers, 1.4M citations (68% related)
Cryptography: 37.3K papers, 854.5K citations (67% related)
Performance
Metrics
No. of papers in the topic in previous years:
Year  Papers
2021  9
2020  13
2019  16
2018  16
2017  10
2016  12