Gross Deficiencies Of Unix

This page originally contained an invective-filled rant by RichardKulisz that was not specific to Unix. What signal it possessed was later explained more clearly - and far more positively - in BlueAbyss, OperatingSystemsDesignPrinciples, NewOsFeatures, and ObjectBrowser.

See also: UnixAndWindowsHell

Interesting snippets preserved (after massive cleanup):


Regarding "Freedom":

The point of this rant is that I cannot realistically do anything. It is most decidedly not worth my while to fix ANY problems. Unix relies on economic tyranny to enforce its technocracy; there is no economical way to fix anything.

When you're talking about the "freedom" to change things in Unix, this is fake freedom, not real freedom. For freedom to be genuine it has to be realizable. It must be possible for ordinary people, under ordinary conditions, to act upon this freedom and enjoy its use. People having to learn things first is inexcusable. Freedom doesn't exist if, whenever someone tries to act upon it, there's always some excuse given for why they can't. Under the Unix junta, everyone has the fake freedom to invest several thousand hours learning C/C++ programming and the Unix system in order to change it.

When you live in a system with a measure of real freedom, you can actually acknowledge its defects and try to fix them so as to grow its freedom. When you live under a junta, you have Stockholm syndrome. That's the difference between a system that's fundamentally sound and one that's fundamentally broken.

I am just as much enslaved under Unix as I am under MacOS or Windows.

Genuinely free software would have to be designed. It would have to be created and maintained by people with the right attitude. And it would have to be sustained under the right kind of license, neither proprietary nor so-called free software.

(UnixJunta? has a much zestier ring to it than GrossDeficienciesOfUnix)


Regarding Linux and OpenSource software "evolution":

*n*x isn't developed by design, it's developed (if anything) by evolution, with every developer free to choose his or her own approach and toss it into the fray. Eventually, one might predict that some process of software survival-of-the-fittest will result in a well-integrated set of flexible, powerful, intuitive, and easy to use applications, but if it happens at all, it's bloody slow going. I'm not holding my breath, and I've been using Linux for specific purposes since 1992, using it almost exclusively on the desktop since 2000, and I'm still stuck in a dysfunctional love-hate relationship waiting for it to happen.

However, as bad as it is, it comes closer to doing what I need than anything else. Unix may be bad, but the only thing worse and still usable is everything else.

This will probably only change when a new OS comes along that is sufficiently revolutionary to be instantly valuable, sufficiently familiar or intuitive to be immediately usable, and has enough of what everyone wants (probably via emulation and/or virtualization, at least at first) in terms of available or bundled application functionality to not seriously disrupt workflow.

Nobody who knows biology would wait for a solution to evolve. The truth is that evolution sucks. Evolution, given a million times more time and a gagillion times more opportunity, still can't come up with designs even halfway as simple and powerful as those of a single designer. People need to appreciate that evolution works on geologic timescales.

I'm not entirely sure what your point is. Genetic algorithms find solutions, at least in some areas, where humans have not. But evolution (both natural and synthetic) notoriously finds solutions which are, once examined, unnecessarily complicated, non-linear, unaesthetic, etc. Despite such faults, both biological evolution and genetic algorithms can sometimes at least find solutions where humans have difficulty finding solutions. Not always, but demonstrably sometimes.

"difficulty finding solutions"? I'm betting those problems are of the balancing of arbitrary forces kind. In any case, your example doesn't matter and the reason why is because you're comparing silicon evolution with biological brains. If you compare silicon evolution with silicon brains (AI or expert systems) then the latter will win every time. The same obtains if you compare biological evolution with biological brains. We live in a very unusual era where we have silicon evolution available but not silicon brains, and even then biological brains routinely find orders of magnitude better solutions than silicon evolution, by your own admission. My point is this: people radically overestimate evolution, and if they appreciated how much worse it is than actual thinking then they'd go with thinking every time.

YouCantGetThereFromHere barriers are omnipresent in evolutionary development. The dual problem for designers is being able to see the totality of everything and being stunned thoughtless by its vastness and magnificent intricacy. So much more is possible for designers that it's easy to understand why designers, but not evolution, actually do need some time to get started. It takes more time to get a house if you start with 'what kind of dwelling do you conceive of living in?' than with 'which of the housing models in this subdivision do you want?'
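For concreteness, here's what the "silicon evolution" under discussion typically looks like: a toy genetic algorithm. This is a minimal sketch -- the problem (OneMax), the parameters, and the function names are all invented for illustration, not taken from any system mentioned above.

```python
import random

def one_max_ga(length=20, pop_size=30, generations=100, seed=0):
    """Toy genetic algorithm: evolve a bitstring toward all ones."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)          # fitness = number of 1-bits
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)            # single point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

best = one_max_ga()
```

Even on this trivially easy problem the algorithm burns thousands of fitness evaluations to approach an answer a designer writes down instantly -- which is roughly the point being argued above.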


A saying of Jayadev Misra: "...And, I contend that lack of useful theorems about a typical C++ program is what makes it so difficult to make claims about it: its intent, whether a particular change will have a disastrous effect on its execution and whether it can be integrated with other programs. In other words, we typically build mathematical systems whose properties we cannot discern."


Regarding the 'broken' distinction between ApplicationPrograms and the OperatingSystem:

The distinction between core OS and desktop applications only exists in broken systems like Unix and Windows. An OS is a complete integrated programmatic base. If the desktop applications aren't part of the core OS then that means they can't be used programmatically in substantially the same way as the core OS, or more likely they can't be used programmatically at all, hence they're broken. -- RK

If you believe direct, top level HCI functionality belongs in every OS deployment, then I disagree. Many OS applications require only enough HCI to get them configured, if that, and anything more is needless overhead. -- DV

You're confusing the issue of packaging with the issue of what is and is not part of the OS. The requirements of any particular installation of the OS have nothing to do with what is and what is not part of the OS. That's a packaging issue, not an OS issue. An OS is an integrated system that provides a complete programmatic base following certain defined principles [related: OperatingSystemsDesignPrinciples]. Which principles those are depends on the OS in question and in fact defines the OS. Every piece of software that's consistent with the principles of a particular operating system and can be used programmatically is an integral part of that OS.

{One common distinction is between the OS "kernel" itself versus all the things bundled with that kernel to make it usable (libraries, GUI, etc). This distinction is sometimes problematic with systems where traditional kernel functionality doesn't always run in privileged mode (e.g. user space filesystems), but in any case it aids communication to keep in mind that unqualified terms like "OS" can mean different things to different people.}

{I've seen a lot of confusion caused by differing terminology over the years, but I certainly understand the technology involved, by whatever name, and rather than argue with people about how words like "OS" should be used, I prefer to simply understand what people do mean, and then continue talking on that basis. I thus find no need to argue with either Dave or RK about the usage of "OS", despite their usages being somewhat different, and despite RK's usage being somewhat more idiosyncratic than usual -- he has his reasons for it, and that's ok by me.} -- Doug

If you look at all the definitions provided by others, you'll find they're useful only to special interest groups (eg, vendors and lawyers) that have nothing to do with OSes by their nature, but merely by happenstance. In contrast, OS design is an essential part of the OS nature. It's completely impossible for a thing to be an OS without having been designed, whereas it is perfectly possible for a thing to be an OS without ever being packaged, distributed, installed, sold, argued about, or even used. The idea that packagers get to define what an OS is ranks right up there with politicians defining pi.

You can't define an OS in terms of a list of features it's supposed to have because that list of features changes every decade.

It's a fact of usage that nobody, absolutely nobody, considers the Chromium arcade game to be a part of the Linux OS, even if it's on every single distribution. It's simply not part of the OS even if it's installed and is a requirement of the installation, whether the distributor specifies it as a requirement or the user does. So clearly "what comes on the CD" and "what gets installed from the CD" are invalid definitions.

People not beholden to special interests, like selling OSes or adjudicating laws about them, draw the boundaries of the concept of 'OS' along principle / integration lines. There's some kind of highly integrated system which is the core of the OS (this is not the kernel), and then there's a halo around it (X belongs to this halo), and then there are things which are completely outside of the halo. And, depending on context, depending on exactly how much you think integration matters, you may consider the halo to be part of the OS or you may not.

"ApplicationPrograms" whose functions can't all be accessed programmatically fall outside of the halo on two grounds: first, because they can't be operated on programmatically, and second, because they are themselves highly integrated. A system, any kind of system at all, is a stew of interacting objects. If the chunks in the stew are whole (complete in their functionality) and solid (highly integrated) then they're not really parts of the stew, they're just baked potatoes that have been tossed in at the last minute. Ideally, you want to mash everything into little bits until it's a soup (that's Unix's core principle) but you can tolerate a few chunks. What you can't tolerate is icebergs just floating on their own and creating their own weather environment. -- RK

Am I to understand that you believe the OS (regardless of definition -- let's ignore that for now) and application functionality should represent a programmable continuum, based on common concepts, from lowest level hardware abstraction to the highest level of the HCI, including what we currently call applications, thus rendering moot the distinction between OS and application? And, therefore, it is meaningless to speak of (say) a high level GUI being "part of the OS" or not, as it's all part of the OS, whether actually installed or not, unless specifically designed to not be programmed and/or not partake of the aforementioned common concepts? -- DV

Yes, that's it exactly. You got it completely. And further, what are currently called applications are with very few exceptions just non-programmable systems software created specifically with the malicious intent of monopolizing power / preventing users from reprogramming them or using them in novel, non-approved ways. -- RK
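A minimal, entirely hypothetical sketch of that continuum: an "application" whose every user-facing action is an ordinary method, so a script can drive it exactly the way a GUI front end would. All names here are invented for illustration.

```python
# Hypothetical sketch: an application whose user-facing actions are plain
# methods. A GUI would merely bind buttons and menus to these same calls,
# so scripts and interactive users share one programmatic interface.

class Document:
    def __init__(self):
        self.text = ""

    def insert(self, s):
        """The same operation a 'type text' UI action would perform."""
        self.text += s

    def word_count(self):
        """The same operation a 'show statistics' menu item would perform."""
        return len(self.text.split())

# A script uses the "application" exactly as the GUI would:
doc = Document()
doc.insert("an application is just objects")
```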

[Alternatives to ApplicationPrograms: NoApplication, ObjectBrowser, NakedObjects / AutoGenCrudScreens, DocumentDefinitions. RK has been a vociferous proponent of the ObjectBrowser concept, which is one of four core principles to his BlueAbyssFramework.]


Quality of Service, and Priorities

If you look at CPU usage, I suspect you'll find that it's fractal, just like InternetTrafficIsFractal. Now, the rule of thumb for upgrading network links is that a link that's utilized 60% during peak periods needs to be upgraded. In the case of networks, more capacity is cheap (the cost is either proportional or less), capacity grows faster than usage so there's plenty of it available, and there are fundamental limitations on the predictability of internet usage (it grows very quickly and unpredictably). That all conspires so that it makes perfect economic sense to buy 10 times more capacity than you need and to keep it that way.

In the case of hardware, none of these factors apply. Significantly faster CPUs / RAM systems (eg, 100x) simply don't exist, what you can upgrade to is only slightly better and it's disproportionately expensive to upgrade to, CPU speeds aren't increasing all that much, and CPU / memory usage is actually predictable if you stick to common needs. All of this means that buying better hardware is not a sensible strategy, hell it's often not even an option. And what that means is that you need QoS. Common Unices don't have QoS.
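A minimal sketch of the kind of QoS being asked for here: a bounded queue that, under load, sheds the least important work instead of delaying everything equally. The class name, the numeric-priority convention (larger = more important), and the eviction policy are all invented for illustration.

```python
import heapq

class QosQueue:
    """Bounded priority queue that drops the least important work when full.

    Illustrative only: larger priority numbers mean more important work.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []    # min-heap, so the least important entry is on top
        self._seq = 0      # tie-breaker for stable ordering

    def offer(self, priority, item):
        """Add work; if over capacity, evict the lowest-priority entry."""
        heapq.heappush(self._heap, (priority, self._seq, item))
        self._seq += 1
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # shed the least important work

    def drain(self):
        """Return queued items from most to least important."""
        return [item for _, _, item in sorted(self._heap, reverse=True)]
```

Under pressure, a user event pushed into such a queue displaces background work rather than waiting behind it -- the opposite of the first-come-first-served behavior being complained about.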

First, Unix has always had shitty resource management. Unix comes from batch system days and it's never assimilated interactivity. One of the principles of interactivity is that you attend to the user first, not last and not whenever you get around to it but first, always first. This means that you don't optimize the scheduler for "most CPU usage", you optimize it so that the system is responsive to the user first. The lion's share of the system resources go to attending the user, and whatever's left over may go to actual computation. So if there's a choice between cranking out some numbers the user indirectly requested and attending to a user event they generated right now then the correct choice is to attend to the user event immediately; the Unix choice is to let the user wait, possibly forever. Same thing with the network. If the network is hosing the system and the user has generated any kind of event at all, then drop those packets!
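The "attend to the user first" policy described above can be sketched as a strict two-class scheduler: batch computation runs only when no user event is pending. This is an illustration of the policy, not any real scheduler's API; all names are invented.

```python
from collections import deque

class UserFirstScheduler:
    """Strict two-class scheduler: user events always run before batch work."""

    def __init__(self):
        self.user_events = deque()
        self.batch_work = deque()

    def submit_user(self, task):
        self.user_events.append(task)

    def submit_batch(self, task):
        self.batch_work.append(task)

    def next_task(self):
        """User events unconditionally preempt batch computation."""
        if self.user_events:
            return self.user_events.popleft()
        if self.batch_work:
            return self.batch_work.popleft()
        return None

sched = UserFirstScheduler()
sched.submit_batch("crunch-numbers")
sched.submit_user("mouse-click")
```

Note the deliberate starvation: if user events keep arriving, batch work never runs -- exactly the trade-off the paragraph above argues for.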

Second, Unix has never had a workable account management system because it's never had workable security; it's stuck with ACLs. Is it really any wonder that some distributor made the retarded decision to not give the user the root password to their own system? In the context of Unix's irreversible destructiveness (no versioning), withholding the root password almost seems reasonable.

Third and fourth, the inability to close the package manager or cancel its operation once it's running is an example of several problems. It's an example of the fact that windows / applications aren't live objects; they're merely representations of live objects (processes) running in the background somewhere. Since you're not dealing with live objects, you don't have reflection on those objects. So you can't inspect a process or stop it or kill it or cancel its operation or backtrack it.

This is the result of a few things:

First, it's the result of the fact that Unix never assimilated interactivity, which you can see in Unix shells. Unix shells don't have ways to reverse, aggregate or reorder operations. There isn't always a way to cancel an operation (as opposed to killing a process) in Unix because it was never meant to be interactive.

Second, it's the result of the fact that Unix doesn't have a native GUI. Operations performed using the CLI have a certain minimal level of reflection; job control exists. (Note also that Unix CLIs are programmable while GUIs are not.) This is NOT the case for the Unix GUI. Why is that? Because Unix stole its GUI from Smalltalk, and the Smalltalk OS depends on the Smalltalk language to provide reflection and programmatic access. Unix's GUI can't be reflective because it would have to reimplement Smalltalk.

Third, it's the result of the fact that Smalltalk's GUI object model is broken (WimpIsBroken). MVC separates representations of objects from the objects themselves. In principle a good thing; in practice it's abused so that representations are complex objects themselves. The only way to solve this is to automate the representation of objects so that programmers can't fuck it up. But the long and the short of it is that when Unix imported Smalltalk's GUI, it couldn't import a working GUI model (like Morphic or NakedObjects) because Smalltalk doesn't have one. -- RK
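The reversibility that the first point says Unix shells lack can be sketched as a journal that records an inverse alongside every operation, so an interactive session can be backtracked. The class and the key/value "environment" it manipulates are invented for illustration.

```python
class Journal:
    """Records an inverse for every operation so it can be undone."""

    def __init__(self):
        self.state = {}
        self._undo = []    # stack of callables that reverse past operations

    def set(self, key, value):
        """Set a value, remembering how to restore the previous one."""
        old = self.state.get(key)
        if old is None:
            self._undo.append(lambda: self.state.pop(key, None))
        else:
            self._undo.append(lambda: self.state.__setitem__(key, old))
        self.state[key] = value

    def undo(self):
        """Reverse the most recent operation, if any."""
        if self._undo:
            self._undo.pop()()
```

Because every operation carries its inverse, "cancel" becomes a first-class action on the operation itself rather than a kill signal aimed at whatever process happens to be running it.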


Unix is a Field of Shards...

I don't think anyone but the wettest noob would consider Unix anything but a field of shards, but the only thing worse than Unix (and still usable) is everything else, and every half-aware Unix user knows it. It's not going to change dramatically without becoming something else entirely. -- DV

One could disagree with DV's phrasing, but the germ of truth is that people who know Unix well are keenly aware of a variety of its defects, even if they generally like it outside those areas. The exception is the unreasonable raving fanatic, which really is not all that common on any topic, although of course it does happen.

In other words, it's not the case that everyone who says "I like Unix" is unaware of the fact that it has problems.

Particular examples: I like Unix pipes, even though they have certain defects, because their presence is better than their absence. I like regular expressions on Unix, even though they have certain defects, similarly. In both cases I have extensively used systems lacking both, and it sucks not to have them. Fortunately the world has come to its senses regarding regular expressions, after lo, these many decades, and they are getting to be common in all environments and supported by many apps in many environments -- typically without improving on the original defects in question, alas, but at least they're there. -- Doug
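As a small illustration of why their presence beats their absence, the two features compose in a few lines. This is shown in Python rather than a shell so it's self-contained; the log lines and the pattern are invented for illustration.

```python
import re

def grep(pattern, lines):
    """Keep only the lines matching the regular expression, pipe-style."""
    rx = re.compile(pattern)
    return [line for line in lines if rx.search(line)]

# A tiny stand-in for a stream of lines flowing through a pipe:
log = ["ok: started", "error: disk full", "ok: retry", "error: timeout"]
errors = grep(r"^error:", log)
```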


Case sensitivity is unnatural to the majority of users and can cause a lot of headaches and help-desk calls.
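The headache is easy to demonstrate (a minimal sketch; the filenames are invented): names that most users read as "the same file" are three distinct entries on a case-sensitive system, and collapse into one on a case-insensitive system.

```python
# Three filenames most users would consider "the same file":
names = ["Readme.txt", "README.TXT", "readme.txt"]

# A case-sensitive filesystem treats them as three distinct entries...
case_sensitive = set(names)

# ...while a case-insensitive one folds them into a single entry.
case_insensitive = {n.casefold() for n in names}
```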


See HowStandardsEmerge


CategoryOperatingSystem

DecemberZeroFive, cleanup JulyZeroNine


(last edited December 21, 2012)