[time-nuts] Re: When did computer clocks get so bad?

Poul-Henning Kamp phk at phk.freebsd.dk
Wed Sep 29 22:29:39 UTC 2021


--------
Alec Teal writes:

> So you suspect/expect around the time frequency changes started 
> happening, clocks became crap?

Well, it gets complicated fast there, but yes, that's pretty much
where the shit inescapably hit the fan.

Prior to that, most CPUs were clocked with a small PLL chip
which multiplied a 14.318 MHz X-tal up to whatever frequency was needed.

"Not good, not terrible" is probably a fair summary.

But there are two different things at work here.  On one hand,
the choice of Xtals: to keep the jitter on the multiplied
clock low, the Xtal had to be better and better.  (This is
something the "extreme overclocking" people totally fail to
consider when doing their high-school physics experiments.)

But on the other hand, the CPU architecture must offer
/something/ for the kernel to keep time with, and that's what
Intel utterly fiasco'ed because of their Windows focus.

The entire ACPI-solves-that, ACPI-without-glitches-solves-that,
reading-ACPI-is-faster-now saga was just sheer incompetence.

But the ACPI timer, derived from the 14.318 MHz crystal, is
inadequate for most modern kernels in the first place, and that
gets us into the TSC-counts-cycles,
TSC-counts-cycles-without-overflow-issues,
TSC-counts-cycles-only-for-this-core, TSC-pretends-the-core-is-
always-running-at-the-same-speed saga, which was also caused by
Intel's idiocy.

Admittedly, there are good and sane explanations why this is
a hard problem to solve, but competent people solved it in
other architectures decades before Intel botched it.

-- 
Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
phk at FreeBSD.ORG         | TCP/IP since RFC 956
FreeBSD committer       | BSD since 4.3-tahoe    
Never attribute to malice what can adequately be explained by incompetence.



