Software Industrial Revolution

"Planning the Software Industrial Revolution", by Brad J. Cox Ph.D, November 1990, IEEE Software magazine

http://www.virtualschool.edu/cox/pub/PSIR/

"The possibility of a software industrial revolution, in which programmers stop coding everything from scratch and begin assembling applications from well-stocked catalogs of reusable software components, is an enduring dream that continues to elude our grasp..."

Why that is becomes clear if one looks at which kinds of software can be accelerated by moving their kernel into a Xilinx FPGA, versus those which cannot. The majority cannot. The parallel between reusable electronic components and reusable software components is largely a false one.

Another way to understand this is to look at modern hardware development. Is it really so much more advanced than software, as has been claimed for decades? By now we know the supporting evidence, but sadly neglected is the primary refutation: anything hardware can't handle is delegated to software.

By the Church-Turing hypothesis, anything computable can be computed by software. By contrast, non-programmed hardware is not TuringEquivalent. So assumptions that what works for hardware should work for software are unfounded. There has never been a complete reusability solution found for anything TuringEquivalent in any domain; all such are equivalent to finding the solution for software. -- dm
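
A minimal sketch of the distinction in Python (an illustration only; the toy interpreter below isn't itself TuringEquivalent, since it lacks loops and branches, but it shows the shape of the argument, and all the names are made up):

  def fixed_adder(a, b):
      # Stands in for non-programmed hardware: one function, forever.
      return a + b

  def run(program, x):
      # Stands in for software: the behavior is just data, so changing
      # the data changes what the machine computes.
      for op, arg in program:
          if op == "add":
              x += arg
          elif op == "mul":
              x *= arg
      return x

  print(run([("add", 3)], 4))              # 7 -- behaves as an adder
  print(run([("mul", 2), ("add", 1)], 4))  # 9 -- a different machine, same "hardware"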

I understand your argument but I had to scratch my head a couple minutes to figure out why you were talking about hardware at all. Perhaps a bit more exposition is in order?

Probably. I'm going out into the BigBlueRoom right now, though. The essay linked above uses the word "chip" at one point; Brad's surrounding text could at least give a little idea.


"[A]nything hardware can't handle is delegated to software."

True, but most of what is delegated to software could have been handled by hardware if cost weren't a limiting factor. Using atoms instead of bits to build Excel or an MPEG codec isn't impossible, just too expensive. So the question is: as the reusable modularity of software approaches that of hardware, does the cost decrease due to greater efficiency, or increase due to greater rigidity? In our quest to emulate the world of electronic circuit design, are we just recreating the environment that pushed so many solutions from hardware to software in the first place?

No. I meant what I said the first time.

Excel is a particularly strong example. Show me Excel implemented "purely" in hardware, then allow me to change all of the hardwired ones and zeros that appear in the design, and I'll turn it into the GameOfLife or a wiki, or boot Linux on it, just by reprogramming those bits.

You don't need to hardwire any ones or zeros to build a hardware Excel machine. You could do it all with pneumatic tubes, given enough cash. I don't think you're understanding what I'm saying. Consider the CD player. It replaced analog devices (phonographs and cassette tape players) with lasers and DSPs. Over the years, more and more CD player hardware has been replaced with software because of economies of scale. It's cheaper to buy general-purpose embedded controllers (which are also used in other devices going through the same replacement process) than to build custom electronics for each type of device. That's why DVD players can be cheaper than laserdisc players. You could make a DVD player without a computer in it, but it would cost more.

Hardwiring circuitry only makes sense for special cases. In the general case, algorithms need flexibility of datapaths, which leads to reinventing the CPU, in effect, even if you're trying to avoid it.

Beyond a certain level of complexity, or when things aren't sufficiently special-purpose, hardware must delegate to software of some sort, even though it may not be traditional software. E.g. the configuration of FPGAs is a software issue, although not one anything like Pentium machine code. It is a collection of bits that specify interconnect and contents of lookup tables and such, and that is "software".
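
A minimal sketch in Python of what those configuration bits mean for a single lookup table (an illustration with made-up names, not the output of any actual FPGA toolchain):

  def make_lut(config_bits):
      # config_bits: a truth table of 2**k entries -- the "collection
      # of bits" referred to above, for one k-input lookup table.
      def lut(*inputs):
          index = 0
          for bit in inputs:       # pack the k input wires into an index
              index = (index << 1) | bit
          return config_bits[index]
      return lut

  and2 = make_lut([0, 0, 0, 1])    # configured as a 2-input AND
  xor2 = make_lut([0, 1, 1, 0])    # same cell, reconfigured as XOR
  assert and2(1, 1) == 1
  assert xor2(1, 1) == 0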

Agreed, but my point is that cost, not ability, is the driving force for using software in most solutions today.


Re: "The possibility of a software industrial revolution, in which programmers stop coding everything from scratch and begin assembling applications from well-stocked catalogs of reusable software components, is an enduring dream that continues to elude our grasp..."

I believe there are two reasons for this "Lego Wall":

-- top

I think you are correct, but that there are other reasons as well. For instance, programming is, to some extent, about transforming abstract thought into a computation. As long as there are new thoughts, there will be a need for new software. And regardless of how good the off-the-shelf building blocks are, a sufficiently complex new structure will require expertise and significant time and effort to assemble.

Legos themselves, for instance, are a pretty good building block, and people do build things like replicas of the Eiffel Tower with them, but doing so is nontrivial.

There is no reason whatsoever (except wishful thinking on the part of, e.g., pointy-haired bosses) to assume that someday, all it will take is a non-expert plugging together a very small number (e.g. 2... let's say 10 if you're generous about non-experts) of building blocks in order to accomplish any arbitrary task that otherwise would require programming. And in fact, there are a vast number of reasons, including quite a few apropos mathematical theorems, to say that this is impossible in some regards (the other regards haven't been proven impossible yet, but only because no one has phrased the question formally yet).

-- DougMerritt

Although...you can already plug together fairly small numbers of building blocks to build some really complex software, for certain values of software. Think of all the web browsers that pop up using the Gecko engine or KHTML. Or all the programs that now embed Perl, Lua, or Guile as their scripting engine. Or the MS .NET demos that show a working Outlook clone in 5 lines of code, because they embedded the Outlook component.
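
As a toy illustration of that embedding pattern (a hypothetical sketch that uses Python's built-in exec as a stand-in for an embedded engine like Lua or Guile; all the names are made up):

  class HostApp:
      # Stands in for the application that embeds a scripting engine.
      def __init__(self):
          self.log = []
      def notify(self, message):
          self.log.append(message)

  def run_user_script(app, source):
      # Expose a deliberately small slice of the host to user scripts.
      exec(source, {"notify": app.notify})

  app = HostApp()
  run_user_script(app, 'notify("hello from a user script")')
  print(app.log)    # ['hello from a user script']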

The problem is that interesting & profitable computer problems, almost by definition, require that you do something new that people haven't already done. The Gecko clones have almost universally failed outside of specialty products, because why bother using Joe Blow's Open-Source Web Browser when you can download MozillaFirefox or MozillaBrowser? Before embedding became commonplace, writing a scriptable system was something that only large corporations and RichardStallman did - now a lot of small open-source programs include a scripting engine. But routine feats go unnoticed, because the bar's been raised.

I'd contend that the software industrial revolution has already happened. You just don't notice it, because the programs we're asked to build get complicated more quickly than our ability to build them. Notice how nobody writes operating systems or BasicLanguage interpreters from scratch, in assembly, anymore. Even compilers often don't compile down to native code anymore - most use C, the JVM, or the CLR as an intermediate language, which then gets compiled by stable, age-old compilers. Every time I play with a PHP webapp, I'm using a bunch of components that I don't even think about - PHP, MySql, Apache, Linux, various modules & libraries for all four of those, TCP/IP, Ethernet/PPPoE/PPP, Windows, MozillaFirefox/InternetExplorer, and lots of stuff I've never even heard of. That is a software industrial revolution - I never even touch the bits on the wire, nor manage memory in the computer, nor determine how data is stored on disk, nor figure out which process will get the CPU next. -- JonathanTang
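
As a toy illustration of the compile-to-an-intermediate-language point (a hypothetical Python sketch; the one-line "compiler" and its names are made up):

  def compile_to_c(expr):
      # Instead of generating native code, emit C and let a stable,
      # age-old C compiler do the hard part.
      return ('#include <stdio.h>\n'
              'int main(void) { printf("%d\\n", ' + expr + '); return 0; }')

  print(compile_to_c("(2 + 3) * 4"))   # paste the output into any C compiler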

That's a good point. And I didn't say there were no examples of modularity; quite the contrary.

But let me be more explicit about the formalisms. Modular composition can obviously be modelled algebraically; any such model will necessarily have factorization (unique or not), or else it won't support composition, and most interesting such models will be infinite rather than finite groups, with an infinite number of prime/irreducible elements.

The irreducible elements by definition cannot be built by composition.
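
To make that concrete, here is one minimal way to formalize it (my sketch, in LaTeX notation; the argument above doesn't commit to this particular model):

  Model modules as a commutative monoid $(M, \cdot, 1)$ under composition.
  Call $p \in M$ irreducible if $p \neq 1$ and $p = a \cdot b$ implies
  $a = 1$ or $b = 1$. In the model $(\mathbb{Z}_{>0}, \times)$, the
  irreducibles are exactly the primes, and Euclid's theorem gives
  infinitely many of them: no catalog of already-built composites,
  however well stocked, can be multiplied together to yield a new prime.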

This isn't a proof about software - it's hard to have formal proofs about the real world - it's fleshing out what I meant by "there are a vast number of reasons, including quite a few apropos mathematical theorems, to say that this is impossible in some regards (the other regards haven't been proven impossible yet, but only because no one has phrased the question formally yet)".

This can obviously be criticized; we do fairly well "composing" programs out of ideals such as instructions - but one must be careful to distinguish addition from multiplication. Composition of software modules must be "multiplication", else the dream is gone already; it is specifically not addition. Thus the primes/irreducibles are reachable only by addition (the hard way: writing new modules), not the easy way (composing old modules).

Although this is again not a proof, I hope that this example helps motivate the notion that, yes, we have far more reasons to disbelieve that all software will eventually be purely composable, even though, yeah, it's very worthwhile when we can.

-- DougMerritt


I find DM's argument from analogy with integer primes very convincing. I have reservations, though, about the argument that everything too difficult in hardware has been left to software, and that software is therefore inherently more difficult than hardware, so hardware can't be used as a basis for inspiration.

The conclusion logically follows from the assumption and the assumption is largely correct. But it's not exactly correct. It's true that everything that's been deemed too difficult in hardware has been relegated to software. But that's not due solely to essential difficulties in the problems unsolved by hardware. It's significantly due to the fact that hardware designers are retards.

There are plenty of boneheaded designs in hardware. There are plenty of backwards features. There are plenty of missing or useless features. What's lacking is any real understanding of what the hardware is going to be used for. What's lacking is any understanding of software itself. What's lacking is any comprehension of software evolution. What's lacking is vision.

Hardware designers are 20 years behind the times. But come to think of it, software programmers are also 20 years behind the times. And they too are retards. So why shouldn't one learn from the other? Sounds like the perfect marriage made in hell. -- RK

Yeah, well, we're all retards. As you know, though, it's pretty hard work coming up with truly good (as opposed to half-assed sort-of-good) solutions to things, and pragmatically, most people, even academics, work under deadline pressure that makes that nearly impossible.

I agree with you that the hardware/software argument was only approximately correct, not exactly correct, but it's not a bad approximation. (And the cost issue mentioned above is true but is really a different topic.) -- DM


CategoryPaper

