[ale] Overclocking and Failure Rates

Hello,

  I've been reading the Beowulf mailing list for some time, and periodically
I see discussion of overclocking CPUs (particularly Celerons). There is
usually some heated debate about what overclocking actually does; the general
feeling is that it increases the error rate to something on the order of one
error in 1x10^18 computations (see the back-of-envelope sketch after the
questions below). To get a better feel for this, I have several questions
for knowledgeable people.

1. For non-overclocked CPUs, what kind of "error rate" is usually expected,
   and what does Intel quote?

2. What kind of testing does Intel do to establish whether a CPU is
   acceptable at a certain clock speed (e.g., it fails at 400 MHz but
   passes at 350 MHz, so it's sold as a 350 MHz CPU)?

3. Is there any general feeling about what overclocking does to error rates?
   (I realize that this depends on a great number of parameters and a very
   complex model involving heat transfer, etc., but I'm looking for ANY kind
   of feel for this.)
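
To put that often-quoted rate in perspective, here is a rough back-of-envelope
sketch in C. The one-error-in-1x10^18-operations figure is just the rumor from
the list, not a vendor number, and the 350 MHz clock (at one operation per
cycle) and 64-node cluster size are illustrative assumptions of mine:

/* Back-of-envelope: mean time between errors for one CPU and for a
 * cluster, given an ASSUMED per-operation error rate. The 1e-18 rate,
 * the 350 MHz one-op-per-cycle figure, and the 64-node count are all
 * illustrative guesses, not quoted specifications. */
#include <stdio.h>

int main(void)
{
    double error_rate  = 1.0e-18;  /* assumed errors per operation */
    double ops_per_sec = 350.0e6;  /* 350 MHz, one op per cycle */
    int    nodes       = 64;       /* hypothetical cluster size */
    double year        = 3600.0 * 24.0 * 365.0;

    double mtbe_one     = 1.0 / (error_rate * ops_per_sec); /* seconds */
    double mtbe_cluster = mtbe_one / nodes; /* errors scale with nodes */

    printf("Single CPU: one error every %.1f years\n", mtbe_one / year);
    printf("%d nodes:   one error every %.1f years\n",
           nodes, mtbe_cluster / year);
    return 0;
}

With those assumptions a single CPU would hit an error roughly every 90
years, but a 64-node cluster compresses that to about 1.4 years, which is
why the exact rate matters for Beowulf machines.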

  I know this post doesn't pertain to Linux specifically, but I thought the
folks who read this list would be a good group to start with.

TIA,

Jeff Layton

g593851 at fs1.mar.lmco.com
-or-
laytonjb at mindspring.com

P.S. I'm not planning to overclock anything, but I've been reading the heated
     discussions on the Beowulf lists, and I've also seen lots of "benchmarks"
     for overclocked processors and wondered how "valid" those numbers were.
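
P.P.S. For what it's worth, one crude way to sanity-check an overclocked
machine is to repeat a deterministic computation many times and flag any
run whose result differs from the first. The sketch below is only the bare
idea (arbitrary iteration counts, no thermal loading), not a rigorous
burn-in test:

/* Crude consistency check: rerun the same deterministic floating-point
 * workload and flag any run that differs from the reference result.
 * A stable CPU should never trip this; a marginal overclock might.
 * Illustrative sketch only, not a rigorous stress test. */
#include <stdio.h>

static double workload(void)
{
    /* Deterministic loop: same inputs every time, so every run
     * should produce bit-identical results on correct hardware. */
    double x = 1.0;
    long   i;

    for (i = 0; i < 10000000L; i++)
        x = x * 1.0000001 + 1.0e-7;
    return x;
}

int main(void)
{
    double reference = workload();
    long   run, failures = 0;

    for (run = 0; run < 100; run++) {
        if (workload() != reference) {
            failures++;
            printf("Mismatch on run %ld\n", run);
        }
    }
    printf("%ld mismatches in 100 runs\n", failures);
    return 0;
}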