That's Not A Bug -- That's A Feature! -- Famous working paradigm.
In truth, many wonderful facilities of modern widgets began life as bugs. A serendipitous discovery made while investigating a problem can evolve into a highly useful feature.
Intel, creator of the (in)famous X86 series of processors (which has caused a revolution throughout the world), was (and is) known for hardware bugs in its devices that it would refuse to admit were there. If enough scrutiny was applied by the electronics engineering world, Intel would eventually document the bug and call it a feature. Intel's hardware history is rife with examples.
[MartySchrader asserts that the above is true, and he certainly has more experience than I do, but the lack of specific examples generated enough argument for argument's sake that I felt it appropriate to remove it. --AdamBerger]
Well, gee, Adam -- thanks for that. <ahem> However, that doesn't negate the fact that Intel's CPU, MMU, PIA, and communications hardware all had a raft of underlying bugs that pretty much anybody who built Intel-based products eventually discovered. In the last twenty years Intel has created more than enough good, working, bug-free devices to make up for their checkered past, but I for one am not forgetting. -- MartySchrader
Would it kill you to give the poor guy an example? Easily the most famous and severe example was the bug in the floating-point divider of their first-generation Pentium. This became known as the "Pentium FDIV bug". Intel and many onlookers expressed the opinion that it wasn't very important, because division was only wrong sometimes, not always, and most consumers wouldn't care.
This attitude annoys me to this day, even though it's true that most people don't care, because the point is that it does matter to some people. The bug was discovered by Thomas R. Nicely, a researcher in computational number theory, not because he was looking for bugs, but in the course of his work. I've written a lot of code to do computational number theory, and I would be more than a little annoyed if, while testing experimental algorithms, I found I had lost a month of work to a CPU design flaw that I had been assuming was a flaw in my own new algorithm.
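For the curious, the widely quoted trigger case is easy to reproduce. Here is a minimal sketch in C (not Nicely's actual test program), using the oft-cited operand pair 4195835 / 3145727: on a correct FPU the residual is exactly 0, while on an affected first-generation Pentium the quotient came back slightly low and the residual was 256.

 #include <stdio.h>

 /* Minimal sketch of the oft-quoted FDIV trigger case (not Nicely's actual
    test program). On a correct FPU the residual below is exactly 0; on an
    affected first-generation Pentium the quotient came back slightly low,
    so the residual came out as 256. */
 int main(void) {
     volatile double x = 4195835.0;  /* volatile keeps the compiler from     */
     volatile double y = 3145727.0;  /* folding the division at compile time */
     double residual = x - (x / y) * y;
     printf("residual = %g (0 = correct FPU, 256 = flawed Pentium FDIV)\n", residual);
     return 0;
 }

On any modern machine this just prints 0; the point is only to show the sort of quiet wrongness that a computational number theorist's workload stumbles over.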
Those who remember the early days of workstations no doubt also remember the raft of "Dual Processor Motorola 68K" offerings. The reason they had two processors was that the original M68K handled a page fault incorrectly (it could not correctly restart the instruction that generated the fault). The only solution -- until the next rev chip was released -- was to add a SECOND chip, running the same code, so that on a page fault the system switched to the OTHER chip. Hence, the ubiquitous "Dual Processor" claims. Another bug became a "feature".
-- TomStambaugh
More specifically, the 68000 itself didn't even attempt to handle virtual memory page fault interrupts, so a workaround like that was necessary for any similar chip -- it was done for 8080s/z80s, for instance (I architected such a thing for z80s at a startup circa 1980 -- not very ambitious, just page mapping to bump the total physical memory to 1Meg or so).
The 68010 took a shot at it, but it and its companion VM chip had a raft of problems, the highest-profile one being that it added a clock cycle to every virtual memory access compared with physical memory accesses, so places like Sun Microsystems and Apollo had to continue doing nasty kludges.
The 68020 finally did things more or less right.
Meanwhile, Intel was apparently not allowing anyone who had studied computer architecture anywhere near the future-generation design teams. After the 8086 came the 80186, which was broken with regard to memory management; then the 80286, where they had the bright idea that virtual memory was a bad idea (I did development under Xenix on an 80286 for years, and it was truly painful, but Intel kept insisting it was the best thing since sliced bread); then the 80386, where they started to pretend halfheartedly that maybe virtual memory shouldn't be prevented; the 80486, where they started to realize that maybe they'd been screwing up; and then finally, belatedly, the 80586/Pentium, the first in the line able to wholeheartedly support real operating systems. Those were some bad times. -- DougMerritt
Concur. I built the OS for an 80186-based nuclear medical image-enhancement instrument back in '85-'88 and suffered much pain as a result. Finding out how Intel mapped their memory-management and PIA hardware registers on top of each other, despite having tons of unused I/O address space, was proof to me that these guys didn't care about fixing previous problems; they just wanted to knock out the next processor and beat The Other Guys to the punch. My hope is that everybody on the early X86 silicon teams has been killed and eaten by now. -- MartySchrader
[http://techfox.keenspace.com/d/20030228.html]
You may be familiar with the story of a moth being found and removed from the Mark II Aiken Relay Calculator (an early computer), an incident popularly credited with giving rise to the term "bug" in the computer sense. I always used to joke that if they found a moth in a big computer these days, they wouldn't call it a bug; they would call it an undocumented creature. -- ThomasColthurst
We had a web app with page A and page B. Due to a network mistake, following the link from A to B took about 4 seconds to bring up B. Later the network mistake was caught and fixed, and the transition from A to B became almost instantaneous. It was so quick that users often didn't realize they had just gone from A to B, and they filled up the help desk with complaints about confusing A for B or B for A, or about the link "not working". This triggered a discussion on how to visually differentiate the two pages to make the transition clear. The prior lag had inadvertently served as a visual signal of a page change.
Characteristic of: http://en.wikipedia.org/wiki/Br%27er_Rabbit
See: FreudianTypo