What would information technology (IT) be like without academics? Would it impact software as much as hardware?
From CrossToolTypeAndObjectSharing:
I suspect that a lot of things we use would have eventually been "discovered" organically [without academics]. Even IBM punched-card processing machines kind of resembled relational database operations, such that one machine might filter, another group and sum, another join, another union, another sort (sort and join may have used the same machine), etc. But without a parallel world to test on, it's only speculation for either party. It's my opinion that academics over-emphasize their importance in IT. Perhaps it's human nature to magnify our role's importance, regardless of what it is. -t
It's unquestionable that many discoveries would be made without academics, and indeed many important discoveries are made outside of academia. The RelationalModel could easily have been invented at a kitchen table, and in fact was invented by DrCodd whilst he was at IBM's San Jose research lab, so it's (at most) a trivial debate whether it originated inside or outside academia. Popular documents don't say, but it could as easily have been inspired by looking at paper spreadsheets, or by pure whim, as by deep mathematical thought.
What characterises academic work is not the origin of inventions -- which can, and often do, come from anywhere -- but the rigour with which inventions are critiqued, compared, tested, and evaluated, and the degree to which the implications of these are explored or applied to other fields. As such, the real impact of academia on the RelationalModel is not where it came from, but the way in which subsequent academic work has proven (for example) that relational expressions can be transformed in specified ways without changing their semantics. This has a serious impact on automated optimisation, which is of obvious pragmatic value, but one that can only be guaranteed to be correct via appropriate mathematical analysis.
The non-academic alternative would presumably be pure inspiration followed by exhaustive testing. This process may miss out on valuable transformations, and thereby limit opportunities for optimisation, and it cannot guarantee that there aren't transformations that will, under obscure or non-obvious conditions, produce erroneous results that brute-force testing might fail to reveal.
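To make that concrete, here is a minimal Python sketch (illustrative only; the toy data and names are made up) of one such proven equivalence, "selection pushdown": filtering before a join gives the same answer as filtering after it whenever the predicate mentions only one side's columns. The algebraic proof covers every possible input; the assert below checks just one.

 # Selection pushdown: select(join(R, S)) == join(select(R), S)
 # when the predicate refers only to R's columns. The algebra guarantees
 # this for all relations; a test run can only sample particular cases.

 def join(r, s, key):
     """Natural-join two lists of dicts on a shared key column."""
     return [dict(x, **y) for x in r for y in s if x[key] == y[key]]

 def select(rows, pred):
     """Relational selection: keep the rows satisfying the predicate."""
     return [row for row in rows if pred(row)]

 emp  = [{'dept': 1, 'name': 'Ann'}, {'dept': 2, 'name': 'Bob'}]
 dept = [{'dept': 1, 'city': 'Oslo'}, {'dept': 2, 'city': 'Lima'}]
 p = lambda row: row['name'] == 'Ann'        # predicate touches only emp's columns

 unoptimised = select(join(emp, dept, 'dept'), p)   # filter after joining everything
 optimised   = join(select(emp, p), dept, 'dept')   # filter first, join less data
 assert unoptimised == optimised                    # same result, cheaper plan

An optimiser that rewrites the first plan into the second is only safe because the equivalence has been proven in general, not merely observed on test data.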
The history of relational suggests it was not "academic testing" that projected relational, but its ability to express typical queries in compact ways compared to the NavigationalDatabases of the day. In other words, brevity in expression. Witnesses talk about "query shoot-outs" with regard to brevity. There was not just one person who pursued it, so different accounts may differ. The account I read emphasized its EconomyOfExpression, but others may have liked other features of it. Further, automated optimization probably required too much horsepower in relational's early days.
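For a rough idea of what that brevity looked like, compare hand-navigating record links with stating the query declaratively (a Python sketch with made-up data, not period code; real navigational systems used CODASYL-style FIND/GET verbs):

 # Query: names of employees whose office is in Oslo (toy, in-memory data).
 offices   = {'A1': {'city': 'Oslo'}, 'B2': {'city': 'Lima'}}
 employees = [{'name': 'Ann', 'office': 'A1'}, {'name': 'Bob', 'office': 'B2'}]

 # Navigational style: the program spells out how to walk from record to record.
 names = []
 for e in employees:
     office = offices[e['office']]        # follow the link by hand
     if office['city'] == 'Oslo':
         names.append(e['name'])

 # Relational/declarative style: state what is wanted, not how to fetch it --
 # roughly SELECT name FROM employees JOIN offices ... WHERE city = 'Oslo'.
 names2 = [e['name'] for e in employees if offices[e['office']]['city'] == 'Oslo']

 assert names == names2 == ['Ann']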
I didn't write that academia "projected relational" (whatever that means). Please read what I wrote -- in particular, the fact that I wrote "for example". To my knowledge, Oracle 2 supported automated optimisation as early as 1980, not that it's relevant to what I wrote.
We'd probably have to test a parallel world without automated optimization (or alternative optimization) to compare on that factor.
I could write efficient algorithms without learning O(n log n) etc. Just time the damn thing. I could write a useful language without understanding the concept of TuringComplete. I haven't used LinearAlgebra to help me program yet. However, learning Computer Science in school helped develop in me a passion for programming, and that is the most beneficial outcome of academia. It's the experience of learning that is more valuable than the content (in this area).
"I could write efficient algorithms without learning O(n log n) etc. Just time the damn thing."
I've no doubt you could write them. However, timing them with a stopwatch only gives you an absolute measure of one implementation in one case, which is dependent (obviously) on numerous variables. BigOh gives you a universal relative measure of asymptotic performance, allowing you to effectively compare one algorithm against another, entirely independent of implementation.
In the "real world", both BigOh and run-time analysis are important. In terms of selecting algorithms, BigOh can be crucial, allowing you to trivially discard the O(n!) algorithm in favour of the O(n^2) one when n is predicted to be large, or perhaps choose the O(n!) algorithm because it's easier to implement and n will be, at most, 3.
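One way to see the two measures working together (an illustrative sketch; absolute times will vary by machine and implementation): time the same two sorts at n and at 10n. The stopwatch gives the individual data points; BigOh predicts the growth -- roughly 100x for the O(n^2) sort versus roughly 13x for the O(n log n) one.

 # "Just time it" gives one data point per run; BigOh predicts how the cost grows.
 # Going from n to 10n, the O(n^2) insertion sort should slow down ~100x,
 # while the O(n log n) built-in sort should slow down only ~10-13x.
 import random, timeit

 def insertion_sort(xs):                     # O(n^2) comparisons on random input
     xs = list(xs)                           # work on a copy
     for i in range(1, len(xs)):
         j, key = i, xs[i]
         while j > 0 and xs[j - 1] > key:
             xs[j] = xs[j - 1]
             j -= 1
         xs[j] = key
     return xs

 for n in (1000, 10000):
     data = [random.random() for _ in range(n)]
     t_ins = timeit.timeit(lambda: insertion_sort(data), number=1)
     t_std = timeit.timeit(lambda: sorted(data), number=1)   # Timsort, O(n log n)
     print(f"n={n:6d}  insertion_sort={t_ins:.4f}s  sorted={t_std:.4f}s")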
As for programmers being better without college, I doubt it. Assuming they knew how to program in the first place, they'd probably be adequate for producing simple ad-hack applications. But they would have neither the breadth of technical knowledge nor the independent learning skills required to be effective in a variety of roles, especially as the industry evolves, nor the academic skills (critical thinking and evaluation, rigour, etc.) needed to make sound rational decisions. The latter, of course, would be fine assuming they'll never be moved into a decision-making role...
See also AcademicRelevance