In these heated debates on StaffingEconomicsVersusTheoreticalElegance, the issue of "grokking" ability and education comes into play.
We can perhaps observe the tradeoffs more clearly with UIs. CUIs (command-line interfaces) are arguably more efficient per unit of time and body movement for a good many tasks, if properly tuned. However, GUIs are still the most common choice because they are considered easier to grok in the short and medium term. If properly trained and educated, users could be using CUIs and be more productive. In practice, though, this training is usually skipped and GUIs are used instead. Thus, "technical" efficiency is outweighed by human grokkability and by the cost of obtaining and maintaining staff capable of using CUIs to their potential.
An individual may be available who has mastered CUIs and would be very productive with them, but if a given application has only a CUI because of this one person, that person may leave the company or get promoted, and the follow-on employee may not be so adept at CUIs. Thus, UI designers cannot realistically cater to the high-end user. They must instead target the average user, or perhaps design so that roughly 75% or more of users can cope, since pissing off 25% of customers is usually not a good business model, and it can be disruptive to business if somebody who can otherwise handle their other tasks gets stumped by your product.
Why should writing code be any different? Ideally, all coders would be well trained in high abstraction and would be using Lisp or Lisp-like languages. The quantity of code that would have to be typed, changed, and read would be smaller. However, the industry found that depending on ideal abstraction, and on the related training actually being completed, doesn't pan out in practice, and that there is a Goldilocks zone between training/employee costs and ideal code. (See GreatLispWar and IfFooIsSoGreatHowComeYouAreNotRich.)
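The code-volume claim can be illustrated with a small, hypothetical sketch (plain Python rather than Lisp, and the names are invented for illustration): one higher-order function replaces several near-identical hand-written loops, but at the price of requiring every maintainer to be comfortable with functions-as-arguments — exactly the grokkability tradeoff described above.

```python
# Hypothetical sketch: instead of writing a separate loop for each
# report (total price, total quantity, ...), we factor the shared
# shape into one generic helper.

def summarize(records, key, transform):
    """Pull a field out of each record, transform it, and total it."""
    return sum(transform(r[key]) for r in records)

orders = [
    {"price": 10.0, "qty": 2},
    {"price": 3.5, "qty": 4},
]

# Each of these one-liners would otherwise be its own hand-rolled loop.
gross = summarize(orders, "price", lambda p: p)        # 13.5
units = summarize(orders, "qty", lambda q: q)          # 6
discounted = summarize(orders, "price", lambda p: p * 0.9)
```

Less code to type, change, and read — but a new hire who has never seen a lambda passed as an argument may find three explicit loops easier to grok than one abstraction.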