2 + 2 = 5, for sufficiently large values of 2
Mathematicians frequently use the phrase "for sufficiently large values of N". Often the audience is not quite aware of just how large 'sufficiently large' really is. It can be as large as, or considerably larger than, the number of particles (atoms, or more fundamentally, baryons and leptons) in the universe. Your guess at the exponent may be off by many orders of magnitude.
Sufficiently large can be really big in the way that space is not.
This habit also sometimes extends to pure mathematicians, who step outside their field of expertise and try to apply mathematics to a world where 'sufficiently large' is often never reached.
This habit is also sometimes exemplified by non-mathematicians who attempt to do something as simple as compute the BigOh of an algorithm and blithely assume that O(kN^2 + cN + z) is best described as an O(N^2) algorithm, "because for sufficiently large N...". Sometimes, however, cN + z >> kN^2 for all practical N. In truth, if c or z is SufficientlyLarge, k is SufficientlySmall, and N is typically a little number, say < 50,000, the algorithm may be better approximated as O(N), or even as constant time dominated by z.
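Here is a minimal sketch (in Python, with made-up coefficients k = 1e-9, c = 1000 and z = 5e6 chosen purely for illustration) of how a formally O(N^2) cost formula can behave like O(N) over every practical input size:

 # Hypothetical cost model: k*N^2 + c*N + z with a tiny k and a large c.
 def cost(n, k=1e-9, c=1000.0, z=5e6):
     return k * n * n + c * n + z

 for n in (100, 1000, 10000, 50000):
     quadratic_part = 1e-9 * n * n    # never exceeds 2.5 for N < 50,000
     linear_part = 1000.0 * n         # dominates throughout this range
     print(n, cost(n), quadratic_part, linear_part)

Over that range the quadratic term never contributes more than a few units, so measurements would look linear (or, for small N, flat) despite the N^2 in the formula.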
Read more about BigOh, and why it may not be the measure you need.
I think you're being unnecessarily harsh. BigOh has its place, and it's an important one. For those with sufficient experience it serves as a marker to ask the question "How large are we talking about?" For example, HeapSort is faster than BubbleSort for sufficiently large N - how large, and when does the change-over occur?
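As a sketch of how one might answer that empirically (the crossover depends heavily on machine, language, and implementation, so treat the approach, not any particular number, as the point):

 import heapq, random, timeit

 def bubble_sort(xs):
     xs = list(xs)
     n = len(xs)
     for i in range(n):
         for j in range(n - 1 - i):
             if xs[j] > xs[j + 1]:
                 xs[j], xs[j + 1] = xs[j + 1], xs[j]
     return xs

 def heap_sort(xs):
     heap = list(xs)
     heapq.heapify(heap)
     return [heapq.heappop(heap) for _ in range(len(heap))]

 for n in (8, 16, 32, 64, 128, 256):
     data = [random.random() for _ in range(n)]
     tb = timeit.timeit(lambda: bubble_sort(data), number=200)
     th = timeit.timeit(lambda: heap_sort(data), number=200)
     print(n, "bubble:", round(tb, 4), "heap:", round(th, 4))

Wherever the change-over lands on your machine, it is a small N - which is exactly the kind of question BigOh alone cannot answer.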
A concrete example: The InverseAckermannFunction? (which occurs naturally in many GraphAlgorithm?s, e.g. UnionFind?) grows very slowly. So slowly that, for all computationally relevant values (up to the number of particles in the universe), you can assume it to be constant (i.e. < 6).
I think it was DonKnuth who said "log log log n <= 3". For comparison: exp(exp(exp(3))) > 10^229520860.
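That comparison is easy to sanity-check without BigNum arithmetic, since log10(exp(exp(exp(3)))) = exp(exp(3)) * log10(e), which fits comfortably in a double. A quick check in Python:

 import math

 triple = math.exp(math.exp(3.0))        # exp(exp(3)), about 5.28e8
 digits = triple * math.log10(math.e)    # log10 of exp(exp(exp(3)))
 print(digits)                           # about 229,520,861

So log log log n only climbs past 3 once n has over two hundred million decimal digits.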
Of course, "sufficient experience" is a similarly moveable feast.
My favorite anecdote on this subject concerns sorting algorithms with multiple processors. With a number of processors on the order of the number of things to be sorted, you can get sorting algorithms down to O(lg^2 n) time. 'Recently', a new algorithm was discovered which ran in O(lg n) time. However, the coefficients involved for real algorithms are something like 1/2 and 40, meaning that while O(lg n) is better by asymptotic analysis, in the real world it is only better when sorting roughly 2^80 items or more - about 10^24, far beyond any practical input. However, custom dictates that when people publish papers on such sorting algorithms, they treat the O(lg n) one as better.
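Plugging in the coefficients quoted above (1/2 and 40 - treat them as the rough figures from the anecdote, not values lifted from the actual papers), a few lines of Python show where the asymptotically better algorithm actually pulls ahead:

 import math

 def old_alg(n):                 # O(lg^2 n) with a small constant
     return 0.5 * math.log2(n) ** 2

 def new_alg(n):                 # O(lg n) with a large constant
     return 40.0 * math.log2(n)

 for e in (10, 20, 40, 79, 80, 81, 100):
     n = 2 ** e
     print("n = 2^%d: old = %.0f, new = %.0f" % (e, old_alg(n), new_alg(n)))

With these coefficients the two cost curves cross exactly at lg n = 80, so the O(lg n) algorithm only wins for n > 2^80.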