A system is built with the purpose of building other systems to solve business problems.
It is constructed in such a way as to make strong assumptions about data persistence, UI design/layout, the object model/entity relationships, and the technology used. Instead of using a real object model to represent domain concepts, everything is stored as generic metadata, which pushes what would be compile-time bugs into hard-to-debug runtime bugs.
In the end, it may be easy to do the things the system was specifically designed to do, but when a unique business problem presents itself, the rigid framework is hard to adapt to the new task, forcing the engineer to 'hack' around the system to achieve the goal.
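As a rough illustration of the runtime-bug point, here is a minimal Java sketch (all class and field names are hypothetical) contrasting a real domain object with the generic attribute bag such a construction set typically stores everything in:

 // Real object model: the compiler knows what an Invoice is.
 class Invoice {
     private final String customerId;
     private final long amountCents;

     Invoice(String customerId, long amountCents) {
         this.customerId = customerId;
         this.amountCents = amountCents;
     }

     long getAmountCents() { return amountCents; }
 }

 // Generic metadata model: every entity is a bag of name/value pairs.
 class GenericEntity {
     private final java.util.Map<String, Object> attributes = new java.util.HashMap<>();

     void set(String name, Object value) { attributes.put(name, value); }
     Object get(String name) { return attributes.get(name); }
 }

 class Example {
     public static void main(String[] args) {
         Invoice invoice = new Invoice("C42", 1999);
         long total = invoice.getAmountCents();   // a typo here would fail to compile
         System.out.println("typed total: " + total);

         GenericEntity generic = new GenericEntity();
         generic.set("amountCents", 1999L);
         // Typo in the attribute name: compiles cleanly, returns null,
         // and only blows up (NullPointerException) when this line runs.
         long brokenTotal = (Long) generic.get("amountCent");
         System.out.println("generic total: " + brokenTotal);
     }
 }

The misspelled attribute name sails past the compiler and surfaces only at runtime; the equivalent typo against the typed Invoice would never build.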
Often this approach falls into the easy trap of "I'd rather build or re-create technology than focus on the business requirements".
I agree with the spirit of this AntiPattern. However, I detect a slippery slope in the second paragraph and I think that some of the argument is self-contradictory. If you know the requirements beforehand and can construct a framework that makes satisfying them easy, then by all means do so. Don't throw away a good solution because it doesn't solve every problem. There will always be new problems because the number of cheeseburgers in the universe is increasing.
Maybe I'm being obtuse, but what's the connection with cheeseburgers?
Is this also potentially an example of AbstractionInversion, in the sense that authors of the EnterpriseApplicationConstructionSet are trying to create something generic to solve very specific (business-oriented) problems? Or is this simply a case of PrematureGeneralization?
In the age of autogenerated code and strong reflection, it is not strictly correct to suppose that this pattern automatically turns compile-time bugs into runtime bugs. If the system partially or fully leverages code generators, then there is every reason to believe static analysis tools can verify some or all of the generated system's components.
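A minimal sketch of that idea, assuming a home-grown generator (all names hypothetical): the metadata is consumed once, at build time, to emit ordinary source that the compiler and any static analysis tooling can then check like hand-written code.

 import java.util.LinkedHashMap;
 import java.util.Map;

 class EntityGenerator {
     // Emits a plain Java class from a field-name -> type metadata map.
     static String generate(String className, Map<String, String> fields) {
         StringBuilder src = new StringBuilder("public class " + className + " {\n");
         for (Map.Entry<String, String> f : fields.entrySet()) {
             src.append("    public ").append(f.getValue())
                .append(" ").append(f.getKey()).append(";\n");
         }
         src.append("}\n");
         return src.toString();
     }

     public static void main(String[] args) {
         Map<String, String> fields = new LinkedHashMap<>();
         fields.put("customerId", "String");
         fields.put("amountCents", "long");
         // The emitted source is compiled like any hand-written class,
         // so a misspelled field name in client code fails at build time.
         System.out.println(generate("Invoice", fields));
     }
 }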
See also: KitsAsCompromiseToBuyOrBuild