A postulate:
It is easier to write bugs in a language or framework that unnecessarily multiplies entities. Conversely, simpler languages and frameworks reduce the number of ways things can go wrong.
Sadly, we seem to love having a plethora of ways we can shoot ourselves in the foot. Only a small number of cultures appear to make a virtue of poverty: SmallTalk, LispLanguage...
See OccamsRazor.
Discussion
EiffelLanguage? (I wouldn't consider Eiffel a minimal language - xUnit seems to prove that YouArentGonnaNeedIt applies to built-in assert statements.)
Unit testing and DesignByContract are orthogonal, not competing, technologies. Unit tests exercise the contract defined by the interface of a class. If you CodeUnitTestFirst then tests also specify the contract of a class. However, those contracts are hidden in the code of the test cases and intermixed with code to set up and tear down the context of the test. Language-level DBC moves the contracts to the interface of the class, where they belong, leaving unit tests to exercise those contracts to check that they are correct. It's a matter of clean SeparationOfConcerns.
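A rough sketch of the separation in Python (the Account class, its contract, and the tests are invented purely for illustration, not drawn from any of the languages discussed): the assertions state the contract on the class interface itself, while the unit tests merely exercise it.

 # Contract on the class interface (DBC-style, approximated with asserts):
 class Account:
     def __init__(self):
         self.balance = 0

     def deposit(self, amount):
         assert amount > 0, "precondition: amount must be positive"
         old_balance = self.balance
         self.balance += amount
         assert self.balance == old_balance + amount, "postcondition violated"

 # Unit tests exercise the contract rather than restating it:
 import unittest

 class AccountTest(unittest.TestCase):
     def test_deposit_increases_balance(self):
         account = Account()
         account.deposit(10)
         self.assertEqual(account.balance, 10)

     def test_deposit_rejects_nonpositive_amounts(self):
         with self.assertRaises(AssertionError):
             Account().deposit(0)

 if __name__ == "__main__":
     unittest.main()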
LispLanguage... CallWithCurrentContinuation is pretty good at removing feet. It's an example of taking simplicity so far that it wraps around to complexity again.
You just move the complexity into the arena of many interacting parts, which can in many ways be far harder to get right.
...so it is better to have the complexity built-in? I respectfully disagree, on the basis of "making simple things simple, and complex things possible".
Built in where? Having something available in the language is no more built in than if it's available in a library. Smalltalk is simple. Its library isn't. You don't really see one without the other. The separation is good for the flexibility, but the separation is also somewhat of an illusion. And if a capability isn't built in it can be almost impossible to add it in later. It's not possible to include Eiffel-like DBC in C++. There are plenty of hacks to get close, but it's not the same.
I had a second look at the UnLambdaLanguage page, and while I still stand by the postulate, it seems obvious that there is such a thing as going too far in the direction of simplicity.
The idea is that there is a "sweet spot" in complexity - to get there you remove anything that doesn't help - or helps only marginally - in expressing yourself more clearly.
Reducing redundancy in (human or programming) languages also decreases their robustness against errors. E.g. if there is one and only one sequence of characters that codes up, say, a wiki in a given language, then the tiniest error causes the whole endeavor to fail. This does make it rather easy to discover that the program is wrong, though!
InformationTheory tells us exactly this: redundant information can be used to build error-correcting codes, and IIRC there is actually an optimal level of redundancy (in the sense that a code only needs to be redundant by so much to allow for correcting all one-symbol errors, and so on). Human languages which have been around for a long time and are used a lot tend to evolve to allow lots of ways to express something grammatically.
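As a crude illustration of redundancy buying error correction, here is a triple-repetition code in Python - my own sketch, far from an optimal code:

 # Each bit is sent three times; majority vote at the receiver corrects
 # any single flipped copy within a triple.
 def encode(bits):
     return [b for b in bits for _ in range(3)]

 def decode(coded):
     return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

 message = [1, 0, 1, 1]
 sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
 sent[4] = 1                     # a one-symbol error in transit...
 assert decode(sent) == message  # ...is corrected thanks to the redundancy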
LarryWall has described PerlLanguage as evolving this way on many occasions, and I've often heard it criticised as being able to execute line noise... and yes, I find Perl MUCH harder to debug than BondageAndDisciplineLanguages that have very few ways of doing things. Anyway - I can't help but feel there is some information-theoretic backing for the postulate made on this page.
This [complexity or redundancy?] is fine for writing code, but maintenance of a system is much more expensive than its initial construction. Languages with a lot of redundancy and choice allow different programmers to write the same construct in different ways. This makes it harder for other programmers to read, understand and change their code. Eventually code that is modified by several different programmers will use many different constructs for doing the same thing, making it even harder to read. This is why CodingStandards are defined for languages with redundancy or choice.
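Even a comparatively spare language offers enough choice to make the point; the three snippets below (an invented example) build the same list, and a maintainer has to recognise all of them as equivalent:

 numbers = [1, 2, 3, 4, 5, 6]

 # One programmer writes an explicit loop...
 squares_a = []
 for n in numbers:
     if n % 2 == 0:
         squares_a.append(n * n)

 # ...another a list comprehension...
 squares_b = [n * n for n in numbers if n % 2 == 0]

 # ...and a third goes functional.
 squares_c = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

 assert squares_a == squares_b == squares_c == [4, 16, 36]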
You'll notice that most CodingStandards are about arranging whitespace around code. This is because few languages impose spacing requirements of their own, while whitespace rules are simple to follow and easy to maintain. The language author shouldn't make presumptions about the layout of constructs, given how implementation details like variable and method names can change drastically between systems.
Isn't this the entire mindset behind the Pythonic way of "there should be one (and preferably only one) obvious way to do it"? At least for trivial tasks it makes a lot of sense and keeps programmers from re-inventing the square wheel. Obviously this doesn't translate to lower-level languages very well, but it seems to be pretty swell in Python, converting the grotesque for (i = 0; i < sizeof(x); i++) into the pleasantly semantic for item in x:
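Spelling the comparison out in Python (the list of words is made up; both versions print the same thing):

 words = ["simple", "languages", "reduce", "bugs"]

 # The C-flavoured traversal transliterated into Python - the index
 # arithmetic and the bound are still there to get wrong:
 i = 0
 while i < len(words):
     print(words[i])
     i += 1

 # The Pythonic version - one obvious way, nothing left to fumble:
 for word in words:
     print(word)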