Tee State

A T state is a clock cycle. Early microprocessors required multiple clock cycles per instruction. Later RISC processors achieved one instruction per clock cycle by pipelining: it still took multiple clock cycles to get anything done, but multiple instructions were in flight simultaneously, so on average one instruction finished per clock cycle.
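The "one instruction finished per clock cycle" arithmetic can be sketched with a toy model (the five-stage pipe below is illustrative, not any particular processor's):

```python
# Toy model of an ideal pipeline: no stalls, no hazards.  Each
# instruction spends STAGES clock cycles (T states) in flight, but a
# new instruction enters every cycle, so N instructions retire in
# N + STAGES - 1 cycles.
STAGES = 5  # e.g. fetch, decode, execute, memory, writeback

def cycles_to_retire(n_instructions: int) -> int:
    """Total clock cycles for n_instructions on the ideal pipeline."""
    return n_instructions + STAGES - 1

for n in (1, 10, 1000):
    cycles = cycles_to_retire(n)
    print(f"{n:5d} instructions: {cycles:5d} cycles, "
          f"{n / cycles:.3f} instructions per cycle")
```

Latency stays at five cycles per instruction; only the throughput approaches one per cycle as the pipe fills.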

Superscalar architectures with multiple dispatch units and redundant functional blocks (ALUs, load/store units, etc.) can frequently sustain a throughput of more than one instruction per clock cycle. Until you get a cache miss....
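The "until you get a cache miss" caveat is just the standard stall-cycle model; the miss rate and penalty below are made-up illustrative numbers, not measurements:

```python
def effective_ipc(ideal_ipc: float, miss_rate: float, miss_penalty: float) -> float:
    """Throughput once memory stalls are charged to each instruction.

    ideal_ipc    -- instructions per clock with every access hitting cache
    miss_rate    -- cache misses per instruction
    miss_penalty -- stall cycles per miss (the DRAM round trip)
    """
    cycles_per_instruction = 1.0 / ideal_ipc + miss_rate * miss_penalty
    return 1.0 / cycles_per_instruction

# A two-wide superscalar (ideal IPC of 2.0) with 2 misses per 100
# instructions and a 100-cycle miss penalty drops well below 1 IPC:
print(effective_ipc(2.0, 0.02, 100.0))  # 1 / (0.5 + 2.0) = 0.4
```

A multi-issue core that averages better than one instruction per clock while hitting cache can spend most of its T states waiting once even a small fraction of accesses go to DRAM.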

Although various technologies have been applied to improve performance beyond that (longer pipelines, very long instruction words, more recently "threading", etc.), one difficulty is that processor speeds have for a long time advanced faster than DRAM speeds, so that memory gets slower and slower relative to the CPU.
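The difference-in-slope problem compounds quickly. The growth rates below are the oft-quoted textbook figures (very roughly 50% per year for CPU performance, 7% per year for DRAM speed); treat the numbers as illustrative rather than measured:

```python
CPU_GROWTH = 1.50   # CPU performance multiplier per year (illustrative)
DRAM_GROWTH = 1.07  # DRAM speed multiplier per year (illustrative)

def relative_gap(years: int) -> float:
    """How much further CPU speed has pulled ahead of DRAM after `years`."""
    return (CPU_GROWTH / DRAM_GROWTH) ** years

for years in (1, 5, 10, 20):
    print(f"after {years:2d} years: gap has grown {relative_gap(years):.1f}x")
```

Even a modest annual difference in slope leaves memory effectively tens of times "further away" within a decade.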

Wider buses have helped. Larger caches have helped. Longer pipelines have helped (although with a nasty cost on interrupts/context switches). Newer memory technologies such as RAMBUS and its competitors have certainly helped. But the industry nonetheless gets closer to a crisis on this subject with each passing year.

This is not the same as the infamous Moore's Law crisis that many have long predicted; this is an earlier coming crisis caused by the difference in slope between Moore's law for DRAM and that for CPUs.


A nearly extinct species of metric for gauging performance of instruction sets in processors, often heard in the expression "T-states per instruction."
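Counting them was straightforward: add up the documented clock counts for each instruction. A sketch, using a hypothetical Z80 memory-clear loop (the per-instruction T-state figures are the familiar datasheet numbers, quoted from memory; verify against a real manual before budgeting by them):

```python
# T states per instruction, as published for the Z80.
T_STATES = {
    "LD (HL),A": 7,          # store A to memory
    "INC HL": 6,             # advance the pointer
    "DJNZ taken": 13,        # decrement B, branch back while nonzero
    "DJNZ fallthrough": 8,   # final pass: branch not taken
}

def loop_t_states(iterations: int) -> int:
    """T states to run the three-instruction clear loop to completion."""
    per_pass = (T_STATES["LD (HL),A"] + T_STATES["INC HL"]
                + T_STATES["DJNZ taken"])
    # the last pass falls through instead of branching back
    return iterations * per_pass - (T_STATES["DJNZ taken"]
                                    - T_STATES["DJNZ fallthrough"])

t = loop_t_states(256)                      # clear 256 bytes
print(t, "T states:", t / 4e6 * 1e6, "microseconds at 4 MHz")
```

At 26 T states per byte cleared, "T-states per instruction" translated directly into microseconds you could budget against a deadline.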

Over the last 20 years I have heard the mantra "we don't count T-states any more" because "the processor is so fast it doesn't matter."

And the product doubles in size and resource requirements. And we can't find a faster processor at a reasonable price. And we start counting T-states again. And then they release the 80386 and, magically, T-states don't matter any more.

So we've switched to the ARM processor, and it's got so much power we don't have to count the T-states any more. Again.

Except that now someone wants to implement RealTime stuff on WinCe ... and deliver rich content at the same time ... and we're starting to wonder if we'll be able to find a T-state in the wild. We have archeological evidence that they were once plentiful here and roamed the vast plains of RAM in large herds. Some cataclysm must have wiped them out.

Thinking about the magnitude of such an event is enough to make you WinCe.


What is the etymology of 't-state'? Is it 'transition state', or 'state at time t', or something else? - JayOsako


EditText of this page (last edited March 13, 2011) or FindPage with title or text search