This is a discussion based on material in GreatLispWar. I'm trying to present a somewhat more concrete model for training/knowledge and what the "market wants".
The skill/training-level issue can be viewed as resource allocation, AKA economics.
p = k1*s1 + k2*s2 + k3*s3 + ... + kn*sn
"p" is total productivity of a given employee. "kn" is knowledge about sn and "sn" is the Skill N, or more specifically, the importance of Skill n to Productivity (p). Ideally an organization and/or employee wants to optimize p, and one way is find the best allocation of k's (knowledge).
Remember, in the "departmental" model (above), the skills will also include domain skills, and perhaps other IT skills besides development, such as networking, DBA, Photoshop, SmartPhone programming, etc. (I've counted at least 30 skills I can list, and probably many I've forgotten at the moment.)
For now, let's assume that the total time allocated to gaining Knowledge is fixed. Thus,
k1 + k2 + k3 + ... + kn = constantX
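As a toy sketch of that budget in Python: note that if p really were purely linear, the "best allocation" would trivially dump the whole budget into the single highest-weighted skill (a corner solution), which is one reason the linear form is only a first approximation. (Weights and budget are made up.)

 # Maximizing p under the linear model with a fixed knowledge budget.
 # With a linear objective and a linear budget, the optimum is a corner
 # solution: the entire budget goes to the highest-weighted skill.
 s = [2.0, 1.0, 4.0]    # importance weights (made up)
 constantX = 10.0       # total learning budget: k1 + k2 + ... + kn

 best = max(range(len(s)), key=lambda n: s[n])
 k = [constantX if n == best else 0.0 for n in range(len(s))]
 p = sum(kn * sn for kn, sn in zip(k, s))
 print(k, p)            # [0.0, 0.0, 10.0] 40.0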
If one focuses on a specific skill, such as HOF's, let's say that's "s34", then one has to make the argument that some OTHER k should be reduced. Nobody has made a good argument that s34 is more important than s35 or s7 etc. Please try.
Now, one could argue that constantX should be increased and a given employee should spend time learning EVERYTHING more. But time spent learning takes away time spent on other things, such as actually doing the here-and-now job. And it may make the employee more valuable in the marketplace such that the org has to decide whether they want to pay for that level of knowledge. The best balancing point would make an interesting discussion, but I'm not sure that's the pro-Lisp or pro-HOF perspective.
There's also the issue of using techniques that current or future colleagues may have trouble following.
--top
{This is nothing but speculative nonsense until it has backing from theoretical proof, empirical evidence, or both.}
It is nonsense. About the only thing that isn't is the bit about resource allocation being economics. The linear model relating knowledge and productivity is particularly laughable, since there's very little reason to believe it (and quite a bit not to).
Why are you complaining? I don't understand. DontComplainWithoutAlternatives. The only alternative I've seen is "in an ideal world programmers would fart code and all have PhD's and A+ GPA's because everybody gets a pony and there should be magic". The above is meant as a framework for discussion. Those who make hiring and tool decisions more or less use models similar to this to make their decisions. Even if you disagree with it (for unstated reasons), you should still know how the market works. It's good for you, like broccoli.
I don't have an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy but neither do you! My evidence is D- but yours is F- because it's grounded in an idealistic fantasy universe. I'm at least starting on the road to science by presenting draft models to explore. Your "should" model makes neither empirical nor intellectual sense to me.
- [The claim is "there is very little reason to believe your model" and "your model is speculative nonsense", not "here's another model you should believe". Nobody needs to overturn your theory because you haven't established it. Nobody needs to present alternatives or evidence for them. This isn't a contest for who can provide grade D+ bullshit. If you're providing a model, you need to show how it fits and predicts evidence. You claim to be providing a 'framework for discussion' but you go about it like a juvenile throwing turds into the swimming pool and seeing what floats, and asking others to do better. The only 'discussion' you'll ever receive is how utterly disgusted everyone is with you, at least from the few who walk by or decide to stick around.]
- These kinds of models and analyses are common in business and finance when weighing trade-offs. They are not meant to be perfect but are rather for "thumbnail" analysis. There is a continuum between experienced judgment alone (usually stated verbally), rough numeric models, and careful & detailed & rigorous models. The techniques shown here are in the middle of that spectrum. Often a high-end model is not practical and has costs approaching those of the project itself. It's like spending $5k to decide which $2k TV to buy for your house.
Let me try to simplify this a bit. Suppose a multi-hat developer/IT-worker's boss has suggested they learn Photoshop. They also read this wiki and see a suggestion that they should study and practice HOF's. This person already works long hours and has a family to help take care of. They only have enough time to pick one to study this month. How do you propose they "rationally" decide between the two? -t
You want an alternative? Study economics. (And you'll even find an OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy or two.)
Too vague. Any specific studies you want to cite?
At this point, just about any introductory course would do.
In other words, you've got nothing beyond patronizing verbiage. I've taken 2 econ courses, by the way.
Then you've forgotten everything in them already. Take them again, and pay attention this time. For example, rather than starting with a linear relationship, start with the following. (Where y is a conglomeration of all the factors that affect productivity other than knowledge.)
p = f(k1, k2, k3, ..., kn, y).
I'm trying to keep the discussion focused at this point. We'll explore other factors as they come up. For discussion purposes, it's often best to keep certain things constant in a given section to explore a specific set of variables. The discussion then gradually works around to other factors. English is linear, for good or bad.
There's nothing wrong with that approach. I certainly could have said "p = f(k1, k2, k3, ..., kn)." The thing is that now f itself is a function of y. I.e., "p = f(k1, k2, k3, ..., kn)" is just shorthand for "p = f[y](k1, k2, k3, ..., kn)". (I'm using [] to indicate a subscript.) The two approaches are mathematically equivalent. (I don't know what the purpose of your last statement is. Nothing on this page is dependent on the linearity of English. It's also factually incorrect. English isn't even close to being linear.)
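That equivalence is easy to see with, fittingly, a higher-order function. A quick Python sketch (the body of f is a placeholder, not a proposed productivity model):

 # p = f(k1, ..., kn, y) vs. p = f[y](k1, ..., kn): fixing y yields
 # a function of the k's alone. The body of f is a stand-in.
 def f(ks, y):
     return sum(ks) * y         # placeholder relationship

 def f_given(y):                # "subscripting" f by y
     return lambda ks: f(ks, y)

 ks = [0.8, 0.3, 0.5]
 print(f(ks, 2.0) == f_given(2.0)(ks))   # True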
{"English is linear" makes almost as much sense as "dream is fish".}
A linear language is a language for which a linear grammar exists. A linear grammar is a context-free grammar with at most one non-terminal on the right-hand side of its productions. The phrase makes sense, but it's false.
{I'm almost 100% certain that's not what Top meant.}
That I can agree with.
You guys are sometimes too literal. Anyhow, the above says nothing about what's inside the function. I'm just trying to illustrate the trade-offs involved. When you say a person "should do/learn X", you also have to consider that perhaps they "should" also do A and B and C and D, etc. The typical developer/IT-worker has limited time and resources (or will at least spend only a limited time and resources on work-related stuff). If you make the claim that they should spend it on X, then you need to also explain clearly why they should do X instead of A, B, C, D, etc. That's a perfectly rational question; I don't know why it bothers you.
Then explain what you meant by it. Anyhow, your model doesn't actually illustrate the trade-offs involved, it's just something you made up and never connected to reality. That step is crucial.
How is it disconnected from reality? I invite you to produce a better model of the tradeoffs involved in allocating resources, such as learning time.
{How is it connected to reality?}
If you don't agree that people have multiple choices of what to spend their time on, I don't know what else to say.
There's an alternate model already on this page. But the issue has never been that there aren't choices, but that you've never shown a connection between your model and reality.
Where's the alternative? The big undefined function?
As far as connection to reality, do you agree there are time trade-offs?
Yes, it's the function. While we haven't exactly specified it (how could we, since each situation will be different), it's also true that yours is likewise not completely specified. It does, however, have the advantage of not assuming that the relationship between the cost of learning and its effect on productivity is linear. It also doesn't assume that the effects of learning different things are independent. There is plenty of reason to believe that both those assumptions are false.
Good! We are starting to get somewhere. If linear is a bad approximation, what would you propose is a better approximation, and why do you believe it to be so? In actuality, it's often a case of diminishing returns. For example, plumbing 101 will have a vastly stronger effect on a plumber's productivity than a hydraulics degree per time spent in school (ignoring tuition cost for now).
And I do believe that the effects of learning different things are approximately independent for most learning candidates. Learning Photoshop and learning Ruby and Rails are not going to have a lot of interaction with each other, for example. We can deal with specific overlaps as we encounter them. We can still flesh out a general model without it being accurate for every situation, correcting for those as encountered. A sketch of both ideas follows.
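Here's one illustrative (not measured) functional form in Python: diminishing returns via a saturating curve per skill, plus an optional pairwise interaction term for the overlaps as we encounter them. Every coefficient is made up:

 import math

 # Each skill contributes s[n] * (1 - exp(-k[n])): steep gains at first,
 # then flattening out (diminishing returns). Optional pairwise terms
 # model skills that reinforce (or duplicate) each other.
 def p(k, s, interactions=()):
     base = sum(sn * (1 - math.exp(-kn)) for kn, sn in zip(k, s))
     synergy = sum(w * k[i] * k[j] for i, j, w in interactions)
     return base + synergy

 k = [0.8, 0.3, 0.5]
 s = [2.0, 1.0, 4.0]
 print(p(k, s))                    # skills treated as independent
 print(p(k, s, [(0, 2, 0.5)]))     # skills 1 and 3 reinforce each other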
If I had the foresight to know exactly which knowledge would most increase my productivity, then perhaps an economic analysis would do me a lot of good. But, in practice, I often use knowledge in ways that I do not foresee. And if I did not have that knowledge, I would be unable to judge how much it costs me. (I have not studied past econ 101, but my impression is that opportunity costs are invisible in most economic theories.) Anyhow, a consequence of my imperfect foresight is a belief that people should never stop learning and discovering opportunities that they would otherwise have never recognized. LuckFavorsThePreparedMind?. Learning doesn't need to be entirely separate from doing; there is plenty of space for a little experimentation in any development process. But we do need to find an effective balance between getting it done and learning new ways to do it.
I am not against learning. If you believe that was my point, then you have misunderstood my point. -t
I do not suggest you are against learning. But it is my impression that you do not actively pursue learning, and that you often seek and find thin excuses to remain ignorant and unprepared with respect to whatever topic you're arguing. This impression comes from pages such as BookStop, MindOverhaulEconomics, SelfStandingEvidence, and this one. The evidence for my impression is entirely circumstantial but thoroughly whelming.
That's because you have a simplistic (and wrong) cartoon version of me in your stubborn little head. I probably have an overly simplistic cartoon version of you in that I picture you as a professor or tutor of graduate-school students who wants to justify the cost and time of extra education to the marketplace in order to feel better about your career and worth as a person. It's perhaps more complicated than that, but it's the image that keeps coming back based on your writings. It's my opinion you don't understand the marketplace and human nature. There's nothing wrong with a personal endeavor to learn more of everything and anything. However, such general or specific suggestions may not scale to the larger marketplace. Platitudes like LuckFavorsThePreparedMind? do not tell us anything about the big picture, or at least how to allocate the trade-offs.
Of course my view of you is simplistic. You only present one small aspect of yourself on this forum. But my simplistic view is nonetheless sufficient to recognize you as someone who argues outside his competencies. And you regularly seem to demonstrate hypocrisy, for example: it isn't as though your economic theory will actually tell you how to allocate tradeoffs - in practice, you'll never have valid numbers to fill the variables in your equations.
Re: "Outside your competencies" - you don't prove your claims. You cannot objectively prove that I am wrong, you just claim claim claim. Any idiot can claim I'm wrong or that they are right. The victory goes to the articulator of evidence, not the claimer. Claims are cheap, penny a dozen. If you are so "competent", you fail to demonstrate it with real evidence. I can only conclude you are the one outside your competencies, such as the scientific process. Projection. You often mistake ArgumentByElegance for objective results improvement. Don't claim I'm stupid or wrong, prove I'm stupid or wrong with objective evidence. If you are incapable of such, then SHUT THE FUCK UP, arrogant bastard! EarnYourRightToInsultMe.
Comprehension of logic, proofs, and the scientific process are among those things I am convinced are outside your competencies. Psychology, too. Anyhow, if I was trying to convince someone that you're stupid, it surely wouldn't be you. That'd be stupid of me, analogous to trying to convince a rock that it's a rock. And I'm happy to LetTheReaderDecide from the ample circumstantial evidence you provide.
- It's probably clear to many readers that you piss on the scientific process and never produce anything close to a formal proof despite lofty claims, instead using bullshit such as ArgumentByElegance.
- I'm happy to LetTheReaderDecide that, too. Though, if I'm making an ArgumentByElegance, I'll not even try to convince my readers that it's scientific.
As far as "valid numbers", it's issues to discuss. We may never come to a conclusion, but we can look at the factors. you seem to be presenting a false dichotomy of OfficialCertifiedDoubleBlindPeerReviewedPublishedStudy versus nothing. If you believe that learning certain techniques improves one's market worth more than other things, I assume you have a reason why you believe such and can turn such thoughts into thoughtful descriptions and explanations and examples. I would hope that, but if you are a poor articulator of the grand powerful ideas in your head, then I guess we're stuck with mere claims and insults that the other guy is a "dummy".
I don't present a dichotomy. Platitudes, rules of thumb, best practices and patterns - such things are concise and practical. Unlike your formula.
Who determines these "best practices"? Where are they documented? You are dreaming again in Shouldville. The marketplace? It's rejected Lisp and related techniques in the long run, instead producing DomainSpecificLanguages.
Communities, big companies, standards bodies, marketplace, etc. all contribute to best practices. I'm not attempting to argue in favor of Lisp, though I would note that almost every mainstream language now supports some sort of near first-class block/lambda/etc. expression.
Your characterization of the content of "best practices" is suspect.
Then feel free to use someone else's characterization (cf. BestPractice, http://en.wikipedia.org/wiki/Best_practice). Regardless, it's a community thing, and subject to change over time.
Most of that appears to be written by consultants selling "methodologies". I don't see any real science, at least for IT. The work that would have to be done is something along the lines of taking 100 companies, randomly splitting them into two groups, 50 use technique X and 50 don't, and see which group of companies are the most profitable after 15 years. I see the medical community doing such with patient health, but not much in IT. SoftwareEngineering is still in the dark ages with regard to science.
- I don't see any real science from you, either. That was my point. It isn't as though your economic theory will actually tell you how to allocate tradeoffs - in practice, you'll never have valid numbers to fill the variables in your equations. Platitudes, rules of thumb, best practices and patterns - such things are concise and practical. Unlike your formula. When picking between things that aren't scientific, platitudes, patterns, and practices (regardless of whether 'best' or just good) are more effective and useful than invented formula and hand-waving claims about types or psychology. Your best contributions to this wiki have been when you've played the consultant, presenting non-scientific lists of rules or patterns or wisdom within your competency.
- It's more about the model and not the numbers at this point. I'm just trying to demonstrate that trade-offs exist, to get you thinking and articulating about why YOU believe one approach is better than another beyond "I'm experienced and smart and I just say so". I admit my observations about human nature and "office behavior" are purely anecdotal, and if your observations differ, so be it. But you imply you know about some whimsical factor beyond that, some insight that I or others "don't get", hovering around comments about "outside your competency". Not "getting" something on a larger scale is also an economic factor to consider. You seem to base your economic assessment on the fact that programmers "should" make the effort to know certain things (at the expense of others). Economics is usually not about "should", unless it relates to where consumers, owners, and workers see the most opportunity per effort/resources. If you are certain they are weighing their options wrong, you need to make a better case, such as measuring the benefits and showing the measuring being done. And best practices so far are NOT "concise", at least in terms of their weightings.
- The reason that programmers "should" make the effort to know certain things is precisely so they can see opportunity when it's nearby. The unprepared mind will not see opportunity, will not recognize it, and cannot weigh it. That's what I was saying above re LuckFavorsThePreparedMind?. You seem to believe that people can recognize opportunity without preparation, or that they can create adequate weightings without knowledge or experience. Because your model depends on such foresight, it is useless. Well, not quite useless: I'm sure you'll use it as another thin excuse to avoid learning.
- You still don't seem to be grasping my point. We are back at square one. I didn't say learning was bad. Where the hell did you get that? Arrrrg. Again, there's a basket of things to learn for many IT'ers. I perfectly agree people should always be looking in that basket for things to learn and improve on. But you haven't made the case that your pet topics in that basket are more important than others. Why should somebody read up on HOF's instead of learning smart-phone programming, for example? Yes, it would be nice if all IT workers had the time for everything in the basket, but that's not a realistic assumption. Should we put our family in a station wagon and drive them off a cliff in Nevada to have more time to learn more of the basket topics??? -t
- [You appear to think learning is bad because you refer to academia in disparaging terms (e.g. MentalMasturbation), and complain every time someone points you to new sources of knowledge (e.g. BookStop). In fact, the rest of your rant simply appears to be an excuse not to learn.]
- You are just ticked because you found out you teach stuff nobody cares about.
- [I'm not in academia.]
- I am in academia. We teach using C#, Java, C++, PHP, SQL, Python and Javascript -- among a wide smattering of others -- which appear to be rather popular languages at the moment. My industry contacts tell me we "teach shit" that almost everybody cares about. A few months ago, I did receive an irate call from a company representative who was quite annoyed we didn't teach the entirety of our ComputerScience course using ActionScript, but that was an unusual exception -- she genuinely believed the future (i.e., 2013 and beyond) of mobile development was Flash.
- It's not really about the languages, but what is emphasized. Remember, there are certain techniques I tend to avoid because other developers may be confused by them. It's like writing: know your audience. Code is for humans (maintainers), not so much computers. You are not writing Finnegans Wake, but for Time Magazine (do they still exist?).
- Indeed, it's not really about the languages. That's why I wrote that "we teach using C#, Java, ..." rather than "we teach C#, Java, ..." ComputerScience education is about studying theory and concepts, whilst developing programming skill through practice. We emphasise making code as clear and as readable as possible, but we assume the audience has received the same level of education. The majority of developers our graduates will work with will have received the same level of education, so we're not going to advise developers to avoid (for example) HOFs and produce less readable (to the educated developer) code in order to suit a vanishing minority of sub-standard developers.
- Re: "The majority of developers our graduates will work with will have received the same level of education" -- Well, that's a unique position to be in. Most of the developers I've encountered over the years have a wide mix of backgrounds, many of them are there because of their domain knowledge and have limited or no college-level education. The pattern you describe is not something I see much of. I imagine one would only tend to see such at big IT shops such as Microsoft, HP, Google, etc. Most of my experience is in non-IT companies. They usually use the "department model" where each department finds a their own candidates rather than be managed by a centralized IT hiring group, or are too small to have a centralized IT hiring group, and most IT hiring is managed by an HR generalist(s). And many are skeptical of what may be considered "over educated" applicants because such tend to write code that's hard for follow-on developers to work on, and they tend to get bored easily, wanting to create some elaborate model or apply what may be seen as "esoteric" techniques. Maybe such is a good idea technically, but doesn't fit the their staffing patterns. Legible repetition is often valued over hard-to-read factoring or abstraction that may save code, for good or bad. Re-use and avoiding repetition is low on their priorities compared with having PlugCompatibleInterchangeableEngineers for reasons explained in GreatLispWar. Again, I've been ordered at multiple places not to "over-factor" functions. -t
- It's not a unique position to be in -- it's a typical position for a ComputerScience graduate. It appears the jobs you're doing are not typical for ComputerScience graduates or programmers in general. What you describe is typical for information systems or business analytics and business management graduates, who are expected to be knowledgeable about IT but aren't software developers in the usual sense, though some may develop some in-house applications on the "power user" side of end-user computing.
- I don't believe that statement to be accurate. However, I will agree that a ComputerScience degree is perhaps over-kill for many of the CBA programmers out there in terms of delivering a product. Remember, companies also value domain knowledge of their industry and people skills, something a CS graduate may lack at first. In my last job before the current, I was dinged on my performance evaluation for lack of domain knowledge, although given good ratings for my technical skills. That particular company liked to hire from within in order to have the domain knowledge (although different companies may tilt the other way), but had trouble getting sufficient domain knowledge and tech knowledge in one person.
- Your performance review confirms my assessment. Jobs that expect ComputerScience degrees know that any computer scientist or skilled programmer can acquire domain knowledge, but not every domain expert can be a computer scientist or skilled programmer.
- You are full of squishy smelly stuff. The domain knowledge is often not in textbook or classroom-presentation form for quick digestion. It comes from time and experience. Enough of your Fake Fantasy World. In this particular case the company was used to hiring within such that they were not used to outsiders who were new to their world. They had their own lingo and short-cut phrases. (It was a poorly-managed office such that if employee functions were better partitioned, then the domain-experts could handle more of the domain analysis role. Instead, they got lost in their pet projects. But the environment was what it was.)
- Your description further confirms my assessment: The jobs you do are not typical ComputerScience jobs. They are typical "information systems" or business analyst jobs. Note that I wrote that "any computer scientist or skilled programmer can acquire domain knowledge", which does not imply domain knowledge is "in textbook or classroom-presentation form" or that it is acquired through something other than time and experience. Given time and experience, a computer scientist or skilled programmer can develop extensive domain knowledge. Despite time and experience, most domain experts will not become computer scientists or skilled programmers. That's why there are so many jobs for educated computer scientists and skilled programmers.
- Perhaps we are entering into a LaynesLaw tangle over "computer science jobs". I would paycheck-safe-bet estimate that at least 98% of CBA code is not written by "scientists", and actual code in general is probably roughly in that range also.
- Note that I wrote "... or skilled programmers". I would expect any skilled programmer -- and by that, I mean anyone genuinely skilled at programming -- to understand and appreciate higher-order functions and the like. The majority of widely-distributed software is written by computer scientists -- and by that, I mean people with education in computer science -- and/or skilled programmers. "CBA", as you use the term, appears to refer to essentially an extension of end-user computing, and so I would guess that you're essentially right -- the majority of it is not written by computer scientists or skilled programmers.
- I meant in terms of code volume written, not code usage frequency. I agree that some of the core algorithms and perhaps code used by a good many systems and tools are written by those in close association with academia. Anyhow, for this discussion I'm considering a "typical" developer because we are looking at the bigger economic picture, not necessarily what may be considered "elite", which we don't need to define just yet.
- I'm referring to "typical" developers, both those who graduate with ComputerScience degrees, and those who graduate with information systems or business degrees or who have moved toward IT from some other discipline. You appear to work with the latter rather than the former.
- They are a mix. I'd gander roughly a third have no completed degree, a third "info systems" type of degrees, and a third CS degrees. Most are more concerned about learning the current IT fads that make the most money. If HOF's will get them money and chicks, they'll dig in, otherwise most will ignore them. I'm not condoning that, only reporting my observations.
- That means the majority are, to use my own words, "those who graduate with information systems or business degrees or who have moved toward IT from some other discipline." My assessment was correct. In many other software development job areas, the majority of developers have CS degrees.
- At least in the US, what percentage of programming time do you estimate is spent by those with at least a CS degree? In other words, if we took a snapshot right now of all programming work activity being done in the US, what percent would you estimate is being done by someone with a CS degree? Remember, this is counting time, not lines of code or code usage.
- Note that the behavior of the CS degree holders is pretty much the same as that of the other 2 groups mentioned: they follow the skills that seem most in demand: they follow the money. And the tools are generally shaped by "average" programmers. If you are selling something, you usually target the "average" or typical consumer, not the edges of the bell curve. (I will agree that open-source tools tend to target a more high-brow audience. But some do argue that this hurts their penetration.)
- {I imagine the vast majority of programming is done by people without a CS degree. I count scripts (CGI, HTML, JS, DOM, CSS, and game scripts), spreadsheets, queries, non-trivial CLI (piping commands together), and other models as programming. This is as it should be. IMO, programming and UI should be fully unified (via LiveProgramming, GraphicalProgramming, etc.), perhaps based on a spreadsheet-inspired programming model. I believe in advanced programming in the sense that languages should grow along with their developers, applications, and fields; a language with a low ceiling will suck (which indirectly leads to GreenspunsTenthRuleOfProgramming). But a young programmer should be able to start concrete, i.e. do useful stuff without importing Monads or grokking HOFs or developing DSLs. Naturally, someone joining a project with advanced developers or codebases will need a little time to get familiar with the techniques used on that project, but even then a good language should provide a nice concrete 'view' (e.g. with ProgressiveDisclosure and the ability to inline the HOFs and such). Don't confuse "advanced programming" with elitism.}
- I don't see that most organizations value "advanced programming". They even frown on it fairly often because it limits the pool of maintainers. Like I said in GreatLispWar, organizations lean toward valuing consistency (availability of maintenance) over average productivity. The productivity of advanced programming does not appear to be strong enough to overcome these factors:
- Maintenance availability
- Domain knowledge
- People skills
- Salary costs
- Misuse of advanced techniques for job security games etc. (risk)
- One point of view is that organizations are just "too stupid" to recognize the value of productivity from advanced techniques; the other is that the above factors (at least) are stronger than the productivity gains. I'm sure you will disagree. -t
- You're using the term "organizations" in a generic, all-encompassing sense when it's clear that you're only referring to some organisations. There are many, many, many organisations that value advanced programming, and regard it as a competitive advantage. This applies to small, local, non-IT organisations as much as it does the obvious Googles and Microsofts. I know of a small shipping and logistics firm, for example, that hires computer scientists because it considers its IT infrastructure to be business-critical and a competitive advantage. Organisations that don't regard IT infrastructure to be business-critical or a competitive advantage will almost certainly have the view you've described. The only open question is what percentage of businesses across the world are the former as opposed to the latter? I doubt there is any simple statistic that will answer it.
- I mean it in the sense of "average" or "typical". And we will indeed disagree over the frequency, and let the lack of objective surveys be the end-point for now. Note that if there are enough logistics and shipping firms around, then eventually pre-packaged solutions will be available to reduce the number of programmers devoted to the core of automated optimization algorithms, just like the 13 employee-shift-scheduling products we found online, but still with a fairly large group devoted to customization, such as "side reports". Thus, there is a "scaling suspicion" that hangs over your claims.
- What makes you think your view of "organizations" is "average" or "typical"? What is a "scaling suspicion"? Anyway, many pre-packaged solutions are already available in the field. Companies like the one I described believe they maintain a competitive advantage specifically by identifying what pre-packaged solutions are available -- and these they consider to be a form of competitor -- and creating better solutions for themselves and their clients.
- Okay, but that's still a rather narrow niche, because most shipping companies will probably use the pre-packaged solutions; they are in the shipping business, not the software business. Some will choose technology as their edge; others will use container-material research, schmoozing, marketing, etc. as their method to keep up or get an edge.
- A few years ago, it was a narrow niche. Now that such companies easily equal each other on containers, schmoozing, marketing, etc. -- i.e., the relatively unskilled areas -- they fight it out in the area where they can most differentiate, which is IT. I don't know if you've noticed, but the whole IT/Web/Internet/Software/mobile/social-networking thing is kind of big at the moment. It's why excellent programmers -- those capable of advanced programming -- are well-paid and in demand.
- Like I analogized elsewhere, it's similar to the C++ market when GUI's first came of age two decades ago. GUI's were in demand and that's where all the "elite" programmers went. Then VB, Delphi, Paradox etc. came along and reduced the skill needed to make most GUI's, and it became a more or less generic skill via mostly drag-and-drop widgets. The C++ crowd yammered on about reuse and abstraction, but it turned out that VB made reinventing the wheel faster than the reuse gurus could reuse, because the EightyTwentyRule kept byting them in the ass. COPY AND PASTE WON, for good or bad. A similar thing happened to websites that at first needed very specific knowledge of HTTP and session-tracking techniques. Then web-oriented server-side languages and libraries came along that pre-packaged much of this. The cycle has been slower for web GUI's, but I'm betting something will eventually give and a VB-like commoditizer will come along. In theory it should NOT require a rocket scientist to put a TableBrowser and combo-box on the same screen/page without crashing 20% of browsers. The stupid HtmlStack just makes easy things hard. But even now specialization is used such that a mid-level programmer can use mostly drag-and-drop of, say, Spry Widgets via Dreamweaver to get a good many UI idioms, and then a "Javascript whiz" comes in to help tune or add in the really sticky, buggy, or advanced stuff. We'll see how it all plays out. Flash was supposed to be such a tool, but stumbled. The behavior of GUI's is pretty much known and has been established for 20+ years; it's just that attempts to pre-package these idioms "for the masses" have faltered of late. It may be that open-source is less skilled at packaging idioms "for the masses" than commercial companies because they like to play with "clever" syntax resembling a cat swallowing Lisp and Perl and then coughing up a fur-ball: $()[{]{($.().((%${$}(){[:]|&.${}.())
- Indeed, what is currently considered cutting or bleeding edge will become commoditized whilst excellent programmers will, as always, surf the edge. Just behind the edge is the commodity mainstream, where it's not that "pre-packaging of idioms" has stumbled, it's that the nature of the commodity mainstream itself has changed. Software project managers no longer consider it worthwhile for "a mid-level programmer [to] mostly drag-and-drop", because mainstream development has advanced to the point that today's mid-level programmers are comfortable using HTML/CSS/Javascript and related tools like JQuery. These tools are the result of a "VB-like commoditizer", but the capability bar has been raised -- they can do far more than limited drag-n-drop environments. In short, today's mid-level programming is yesterday's advanced programming, and today's mid-level developer is fully capable of using lambdas, higher-order functions, event-driven I/O, meta-programming and the like. Today's advanced programmer is comfortable with functional programming, advanced type systems, monads, and so on, but these too will become mainstream as the cutting edge advances.
- That's not what I see. The average developer is not better educated than 20 years ago. And most orgs were comfortable with the capabilities of VB, Delphi, Paradox etc. for niche or internal projects. It was the pain of deploying them that made web apps popular, NOT the alleged capability of the HtmlStack. I've sat in many meetings where the trade-offs were discussed. To handle the screwier HtmlStack, there tends to be a division of labor between "front-end" and "back-end" specialists. For example, the back-end developers may be well versed in databases and server-side web programming while the front-enders are not. The VB (desktop) programmer used to handle both the front end and the back end. And many internal web apps use plain-Jane-style server-side-centric programming rather than trying to mirror the desktop GUI closely. CRUD is clunkier than, say, Paradox for such server-side web apps, but if they serve relatively few users, then orgs live with it. If it's a wide-use internal app, then the front-end specialist(s) are called in to give it a better GUI. Deployment of desktop apps was expensive for orgs, especially with DllHell and managing software updates. It's cheaper to spend more on HtmlStack GUI development (and/or live with clunky interfaces) to cut deployment costs compared with the desktop era. -t
- {Regardless of whether or not they are 'better educated', average developers today regularly use HOFs, metaprogramming, reactive programming, and other (historically) "advanced programming" concepts - for purposes such as event handling, asynchronous programming, GL shaders, GUI frameworks, etc. They don't need to understand HOFs to use them, but using them certainly helps them understand. In addition to VB, LabView and MatLab are other programming tools widely used by people who aren't professionally programmers. Are you an average programmer, TopMind? What makes you believe you're on par?}
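- {To make "average developers use HOFs daily" concrete, here's a small Python sketch; the names are invented for illustration, not taken from any particular framework. A sort key and an event callback are both HOF calls, whether or not the author thinks in those terms:}

 # Everyday higher-order functions (invented, illustrative names).
 names = ["Barbara", "Ada", "Grace"]
 print(sorted(names, key=len))          # passing a function as an argument

 # A bare-bones event registry, the shape every GUI framework exposes:
 handlers = {}

 def on(event, fn):                     # fn is a function parameter
     handlers.setdefault(event, []).append(fn)

 def fire(event):
     for fn in handlers.get(event, []):
         fn()

 on("click", lambda: print("clicked"))  # registering an anonymous function
 fire("click")                          # prints "clicked"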
- If you focus on a specialty, you eventually memorize the patterns to use things even if the interfaces are goofy and poorly-thought-out. Similarly, the A/C repair man learns from experience and tips from colleagues that when Bradford model 34726 makes moose sounds, they change dial X to be at 820. They may not know why to set it to 820, they just know it works. There are a good many programmers who quickly memorize patterns of code and learn how to use those patterns without necessarily knowing what's happening under the hood. I'm not sure what your point is about LabView. I consider it a domain-specific programming language and those who program with it fairly extensively are indeed "professional programmers".
- {People who use LabView are often engineers, but (in my experience) are rarely trained or employed as programmers. I don't see how anything else you said is relevant. With respect to raising the bar, it doesn't matter why people learn techniques that are historically considered advanced, nor even that they understand the theory behind them.}
- Most engineers take at least one programming class, usually C, Fortran, or Java if I'm not mistaken. They are usually familiar with conditionals, loops, and functions. I used to work at a company that had 3 LabView programmers, all engineers, and they said they took C in college. And you don't need HOF's for GUI's, as VB and Delphi showed. HOF's can help in writing terse GUI code, but terse is not always better maintenance-wise. Stuffing too many activities into one statement can be difficult to read, and is one of the reasons why Lisp has failed to catch on. Besides, much of the maintenance was done with point-and-click IDE's for VB and Delphi, not code sifting. Again, JS as a de-facto client standard has forced development into places it wouldn't normally go. In a similar but smaller fashion, AutoCad has resulted in many Lisp programmers, but that would probably not be the language of choice for most. AutoCad now offers other languages.
- {By the time I graduated high school, twenty years ago, my school required one programming class to graduate. I learned about loops and conditionals and functions way back in a mandatory elementary school programming class - good old LogoLanguage after booting the computer from a 5.25" floppy. I do not believe that taking one programming class makes anyone a professional programmer. Also, I never suggested you need HOFs for GUIs, so I'm not sure what your point is there.}
- I consider "professional programmer" anybody who gets payed "typical programmer wages" (at least) to spend at least about a third of their time programming. Just because one is using a DomainSpecificLanguage or programming tool doesn't make them a "non-professional". Their expertise is the melding of domain knowledge and programming. It may not be expertise in advanced programming techniques, but it is still a form of expertise that involves programming and a potentially very valuable skill. A similar situation is a "programmer/analyst" who excels at communication with the customers/users to extract their needs by following subtle cues and asking smart but understandable questions. It is a valued skill even though it may not involve "advanced programming". One can save a lot of time if they can find out what's needed without excessive trial-and-error. There are different ways to be "good" and valuable.
- There are certainly numerous "professional programmer" roles writing reports, creating scripts, developing spreadsheet macros, hacking VBA code and so on. The implication of your arguments is that a low level of programming skill is all that's required in general, or that programmers of more advanced capability are rare. Neither is the case. As a result of typical ComputerScience education, today's mid-level programmers are quite capable of using (at least) Java or C#, Python and Javascript/HTML/CSS -- whether they use them daily or not -- and are comfortable with lambdas and higher-order functions and don't give them a second thought, as they're taught along with variables, expressions, loops and functions in introductory programming classes.
- The pool of programmers I encounter come from a wide variety of backgrounds such that orgs have to cater to a kind of middle ground. Advanced techniques don't offer any significant demonstrable advantage for custom apps actually found in the wild, such that they are often shunned to avoid making fungible team maintenance more difficult or unpredictable. That's what I see. If I'm blind or looking at the world all wrong and cock-eyed, I apologize that I am not aware of my disease; it's what I fucking see around me. If your observations differ, so be it. You report, I report, and we move on. Further, those who seem most adept at advanced techniques often have the worst people and team skills, and this bothers orgs.
- In IT departments that regard software development as a central focus, the "middle ground" you describe is the same as the "mid-level programmers" I described, i.e., people who are "quite capable of using (at least) Java or C#, Python and Javascript/HTML/CSS -- whether they use them daily or not -- and are comfortable with lambdas and higher-order functions and don't give them a second thought, as they're taught along with variables, expressions, loops and functions in introductory programming classes." If that's not the sort of people you work with, it's probably because you work in a branch of IT that has largely evolved from end-user computing. Its focus is not on software development, but on solving business problems using IT. Software development is not a central focus, but a means to an end.
- Well, at least the last two sentences, starting with "Its focus is not...", are spot on. The "evolved from end-user programming" is flat wrong, though. There has always been a large group of mid-level programmers and their shops who don't give a care about lisp-isms in the 25 or so years I've been in the field. If there was such "evolution", it finished before I arrived on the scene, making it so long ago as to be irrelevant.
- "Evolved from end-user computing" is 100% accurate (once you've quoted it correctly.) IT departments where the majority of developers are not from a ComputerScience background are almost invariably an outgrowth of spreadsheet and ad-hoc report development, often combined with in-house IT infrastructure support. The focus in such organisations is on producing reports -- where spreadsheets are often considered a form of "report" -- rather than producing applications.
- Keep in mind that most organizations are NOT primarily in the software development business, meaning they don't normally sell software itself. In fact I'd estimate 95% to 99% are not. If so, why is this characteristic the exception to the rule instead of the other way around? Your proportions don't make sense.
- Most organisations are not primarily in the software development business, but many cutting-edge IT departments are in the software development "business" because they know custom-built in-house software can be a key ingredient in giving their organisations a competitive advantage.
- I've worked in or consulted at approximately 30 companies/orgs in my career so far. I generally don't see such outside of the main line of business. This is because plumbers, let's say, eventually realize they know shit about software and software management and will not be able to compete with dedicated plumbing-software vendors. Most managers and staff at a company/org that makes foo came up through the company making foo or working on foo directly. They think in foo. If they had talent in BOTH foo and software development, they'd likely go work at a foo software vendor: those who specialize in domain-specific software for foo. I have seen a fair number of cases where experimental AI and/or advanced statistical analysis techniques are attempted to find customer patterns, billing anomalies, etc., but it's usually done with pre-packaged AI or statistical software, or an AI/stat consultant is used, and it's usually managed or performed by somebody with a degree in statistics or math. Most stat packages have a basic programming language or hookups such that a programmer may be called in to automate steps being done manually. But the bulk of the processing is done via the pre-packaged steps such that this code mostly just manages where the inputs and outputs come and go, not the actual computations. It's "glue code".
- Glue code is acceptable to companies that want to get the job done, but not sufficient for companies that want to get the job done better than everyone else.
- Again, the problem is that it's difficult to find a manager who can manage and understand two aspects well: the domain and software dev.
- No, such managers are easy to find, but they cost more and get snapped up by good companies and organisations.
- So you say.
- You've suggested that your niche is highly-optimized software for companies that try to get a strategic edge over "generic" domain software. They may not worry much about a reliable supply of maintenance developers because they decided up front, and are determined, to pay top dollar to find a good fit for their project. "Pesky salary budgets" are not their concern for such positions. Perhaps this niche biases you toward thinking there are more companies like that. Or maybe my background predisposes me to the opposite kinds of companies for whatever reason, where programmer salaries and programmer staff "supplies" are treated no differently than toilet-paper procurement and budgets by Dilbertian managers. Advanced programming techniques are frowned upon because they make it hard to have PlugCompatibleInterchangeableEngineers that are as easy to replace as toilet paper rolls in standardized restroom rollers. It's hard to know what perspective biases are in place for any given person without objective surveys.
- In many organisations, advanced programming techniques are seen as a competitive advantage.
- But they don't know how to manage such, per above. If the org is not careful, charlatans will sneak in because the more advanced and cutting edge it is, the more room there is for bullshit. It's EasierToVerifyLowBrow? and mid-brow.
- Nonsense. Advanced programming techniques aren't arcane incantations understood only by the anointed few, they're just code. The difference between incomprehension and comprehension of feature <x>, by any educated programmer, is the effort taken to read an article or two and some practice.
- You are assuming that they know, that they care, that they wish to tell management, and that management will believe them (versus "trashing" competitors' code).
- You allude to dysfunctional organisations. Do you believe these to be the norm, or is it possible that your own interpersonal skills are colouring your experiences and perception? On various occasions, I have noted that person A -- socially adept, diplomatic, supportive and positive -- regards an organisation as an excellent place to work whilst person B -- socially "difficult" and of a pessimistic nature (these often go together, not surprisingly) -- regards the same (and probably every) organisation as dire. Which are you?
- You seem to be trying to reflect this back onto me. Let me state it this way: humans are primarily social animals. They are not Vulcans. Decisions are not made in a rational, science-influenced way, but via a mostly ad-hoc social web of push-and-pull of egos and personal connections. Personally, I'd like to see a more democratic and open argument-by-merit decision process, but that's not the way most organizations function, for good or bad. I'll admit it frustrates me at times. I'd like to see more Vulcans and fewer Ferengis. We'll LetTheReaderDecide which "office" model best fits their observations or particular organization.
- I am reflecting this back onto you because I believe the nature of your relationship with the organisations in which you've worked may provide additional insight.
- I've done a self-assessment and don't see that the "problem is in my head". I don't know what else to say. Bias can work both ways, I'd note. Dale Carnegie (HowToWinFriendsAndInfluencePeople) and Scott Adams (Dilbert) have noticed pretty much the same thing about human nature in Office Land (but present it from different perspectives). Perhaps there are those who bitch about the "irrational stupidity" of humanity, like me, and those who accept it as the way things are and learn to live within its rules, and even master its rules. Maybe there are pockets of rationality out there, but they are rare in my observation. I've only worked at one place that came close, and it all went to hell when a bigger company purchased it and filled it with their clueless PointyHairedBosses. Maybe you have the rare nose to sniff out rational companies, but an assumed ability like that doesn't scale into general advice.
- You still haven't answered my questions.
- I didn't come here to talk about me, and I don't want to hear details about your life either unless they reflect a specific issue at hand. Everybody's personality and skills probably lead them to certain "kinds" of organizations and/or tasks, and thus everybody is going to be biased; that is, their experience is influenced by their personality and skills. But I do look around the organizations I am in and talk to other developers to get their perspective to try to widen my viewpoint. Whether that is sufficient to counter bias, I don't know. Unless reincarnation is true, we all only live one life.
- Thank you. You have now answered my questions.
- I thought I said that already somewhere.
- The jobs an individual gets reflect his inclinations, and the jobs an individual does shape his inclinations. We are no doubt speaking truthfully about what we see; unfortunately we can't see everything. An interesting piece of research would be to do a survey that correlates programming techniques/languages with business characteristics, to obtain an overall picture of the frequency and pattern of language technique use. I'll see if I can get a student to do it.
- (Responding to TopMind): People often avoid things that they know are good for them, like exercise, eating their vegetables, tracking their spending, or entertaining new ideas. I've never suggested you've said learning is bad. I've claimed you avoid learning, and you find thin excuses to do so. There's a huge difference. And with regards to your basket: I imagine you'd find plenty more time to learn your topics if you didn't waste hours and hours and hours and hours and hours and hours arguing on this wiki and elsewhere. What's really pathetic is that you don't even learn enough about a topic to present a sound argument or grok other people's arguments, so you just bullshit your way then tantrum about respect when people point it out and your hours and hours and hours are truly wasted (unless you get your kicks from arguing).
- Projection. YOU are the one not grokking (or skim reading) the argument, hypocrite. This is not about me, but the general population of developers and how they allocate their learning time. YOU bastards made it into a personal flamewar instead of focusing on the population. Congratulations. It COULD have been an interesting topic.
- Nice tantrum. And if you honestly believe this page isn't about you, you're only fooling yourself.
- No, you just want it to be because sticky real-world trade-offs make you uncomfortable; you want to hide in ShouldLand?.
- Listen, we both have negative stereotypes or suspicions of each other's motives. Repeating it all over does not help things. I'm just trying to "get in your mind" to see why you recommend the allocation of training time that you do when weighed against all the typical alternatives. You believe it strongly, so surely you have a computation in your mind that makes you think such.
- I expect you have a terrible estimate of where I'd recommend training. E.g. I'd probably ask for 5% of time and effort (~2 hrs per 40hr week) be put towards training higher level concepts (such as HOFs, state machines, strategy patterns, behavior trees, REST concepts, etc.) that are not directly related to the task at hand (albeit, in parallel with other training, not in sequence with it). That's enough, over a few years, to know what's available, to make sound judgements, to avoid over-specialization, to recognize opportunities and avoid unnecessary efforts. If you, TopMind, spent even half the time you currently spend arguing in defense of your ignorance into correcting your ignorance, you'd waste much less of yours and others' time. And while you're good at pretending to be sincere, you certainly did not create this page with an open mind, willing to be convinced that you should spend more time learning.
- As I stated elsewhere, we are expected to consider other and future maintainers. Even if I personally gain such knowledge, expecting other readers and shops to accept my code with such may not fly. Developer A may allocate 5%, developer B 0%, developer C 2%, etc. The chance of the next maintainer being another 5%-er is fairly low. You are again failing to scale your suggestions to general developers, over-focusing on me me me me. I already skip or scale back certain techniques I like for maintenance/readability purposes. More knowledge won't change this. It's not a hatred of HOF's; it's knowing the Romans.
- I also make judgements about what will be easier to maintain, but I think you underestimate people when you pretend most of them are 0%-ers like yourself. "Knowing the Romans" - you're delusional if you imagine you have more knowledge of them than you have of HOFs.
- We each think the other is a biased bastard. What's new.
- Nothing, apparently. Your bullshit hasn't changed for at least a decade. That's also why I know this page is about you (no matter your lies otherwise).
- Projection. You keep implying I'm objectively wrong, but almost never prove it via something close to formal proofs because you think in terms of ArgumentByElegance instead of something more measurable, practical, or ignore WetWare factors. Anyhow, we can LetTheReaderDecide the nature and skill-level stability of their shop's supply of developers over time and the risk profile of tying the shop's software to certain techniques. Hopefully we've given them some ideas of what to look for even if we haven't answered or agreed on the primary question. (Keep in mind the reader might not have control over hiring decisions, but have some control over coding practices.) Maybe something is biasing me, or I'm just missing something, but I look out and see a sea of chaotic Dilbertness out there. If the reader looks around and sees Vulcans, then they follow your advice. If they look around and see characters from Dilbert, then they follow my advice.
- Bigfoot spilled the milk. If you think I'm wrong, prove me objectively wrong, using formal proofs and strong SelfStandingEvidence. It's really about WetWare anyway, whatever spilled the milk I can twist my mind and the scenario just enough to call it Bigfoot. - TopMind-like bullshit for every possible argument.
- My evidence is weak, but so is yours. Both our ball teams have 3 cripples, a midget, and a blind dude (all with an urge to walk into a bar :-) It is about WetWare because computers don't give a shit about code. They don't care whether they are running assembler, Ruby, or BrainFsck; they just execute it like the dumb savants they are. (Performance matters some, but it's secondary in most situations.) Thus, code design is either for nobody or for the human mind.
{So you've been saying for, what, two decades now? For that to be a sustainable argument, you need to demonstrate -- via logical proof or empirical evidence -- that most characterisations are "written by consultants selling 'methodologies'". To be considered compelling or convincing, you'll need to establish rigorous definitions for both "consultant" and "methodology", and then perform a comprehensive literature survey. This is not unreasonable -- it's essentially the same process followed by researchers, and students doing MSc. (and sometimes undergraduate) dissertations and theses. If you feel SoftwareEngineering is still in the dark ages, then do something about it: contribute some genuine research and publish.}
I don't have the money and staff to do real studies. But the default is not that the methodology consultants are correct. The default is "unknown" or nil. I don't need to "establish rigorous definitions" of consultants or anybody for that to remain true. Lack of evidence for B is not necessarily evidence for A. If I don't have strong evidence that Bigfoot knocked over the milk, that doesn't mean that a flying saucer knocked it over.
{Real studies can be done with neither money nor staff. In academic terms, this is called "self-funded research", but it simply means you've taken a little time and made a little effort all by yourself.}
Without some credentials or a reviewable history of such studies, it's difficult to just walk into companies to get such info, especially for somebody like me who lacks charm (ya don't say?). If it's so easy, I suggest you give it a try.
Here's a somewhat extreme example of the "don't target too high" pattern.
I've seen an org with 6 people with enough skill to edit basic HTML. 4 of these 6 are graphics experts who also know HTML, and the other 2 are power users who picked up HTML along the way. But there are only 2 "programmers" with knowledge of scripting or programmed templating.
For this reason the organization is reluctant to use content-factoring techniques, such as scripting to avoid repetition. If there is an urgent change request, neither of the 2 programmers may be available to make the changes. (Lack-of-availability problems have happened more than once.) If the org sticks with direct HTML, along with copy-and-paste repetition of many page elements and page patterns, then the chance of finding an HTML editor on staff is much higher, since there are 6.
This org values the ability to make quick changes for specific pages but doesn't want to hire more programmers. Because the 6 HTML-only staff members are in scattered departments, training them in abstraction techniques is not a viable option, for at least political reasons related to the scattering.
If I were in charge of the whole org, I might shuffle around staff and position titles to better handle this situation, but I am not. Those calling the shots are not so technically savvy, and that's not likely to change; thus they want PlugCompatibleInterchangeableEngineers (or experts) even if that means less use of abstraction.
In general, higher abstraction requires more expensive or less available experts such that companies tend to design staffing patterns around the lowest common denominator.
Twenty years ago, some companies were reluctant to use computers and at best would grudgingly outsource IT work to their accounting firm. Now it's exceptionally rare to find a viable firm without any computerisation, but the above is certainly the modern equivalent to the techno-Luddites of two decades past.
Not sure if those are equivalent. I agree the above firm should bite the bullet and get a real WCMS, but that would require a staffing increase, which they may not be ready to do. Maybe in a future boom they may finally think, "Let's see, now that we have money, what can we fix up that's been messy and neglected?".
See also: MindOverhaulEconomics, IfFooIsSoGreatHowComeYouAreNotRich, ArgumentByElegance, StaffingEconomicsVersusTheoreticalElegance, AreWeBiasedTowardLaborIntensive, WebStoresDiscussion, BlubParadox, HighDisciplineMethodology
CategoryEconomics, CategoryEmployment