Math Is Hard

See also BarbiePrinciple.

Yup. MathIsHard. It just Is.

Some things are just plain hard.


EditHint: It seems that some serious cleanup is in order. I suggest the following:


This may be horribly naive, or the thing may already have been done, but maybe what's needed is a MathPatternLanguage?


This page is in desperate need of tidying. There has been considerable "speaking-past-each-other" all over it. Basically, it seems to boil down to those who think math is arithmetic and school/college-style algebra, and those who have done math degrees and know that there's more to it, such as analysis, algebraic topology, and number theory, just to name a few.

In the context of the original, tongue-in-cheek quotation, the first point of view is reasonable. It's also reasonable to continue to try to educate people that arithmetic isn't all there is to math. It's not clear that this is the right page to discuss some of these points.

Please, can someone start to clean this up a little? I would do so, but given my bias I'm pretty sure I'm not the right person.


Math is hard only for those who don't understand it. (Thank you for the tautology of the day.) When I grasped the concept of equality, it opened up the world of algebra like a key opening a door into the Secret Garden of Math.

If you have an equation, you can do the same thing to each side of the equation, and you get a new equation.

For example:

3x + 4 = 10 (solve for x)

We want to get x on the left, and a single number (the answer) on the right. So, let's "do something" to both sides of the equation that brings us closer to our goal.

Subtract 4 from each side (and simplify).

 3x + 4 - 4 = 10 - 4
  3x = 6
Now, divide each side by 3.

 3x / 3 = 6 / 3
 x = 2
Now, that wasn't so hard, was it? You can do it, too!

-- EdPoor ($50/hour Math SAT coach)
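The same "do the same thing to both sides" recipe can be mechanized. Here is a minimal sketch in Python; the function name and the equation form a*x + b = c are purely illustrative:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x by doing the same thing to both sides."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    # Subtract b from each side: a*x = c - b
    # Divide each side by a:     x = (c - b) / a
    return (c - b) / a

# Ed's example: 3x + 4 = 10
print(solve_linear(3, 4, 10))  # -> 2.0
```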

Unfortunate example, Ed, unless you are arguing that arithmetic is not hard. The above has little to do with mathematics. This common misconception is, I believe, one of the primary reasons that many people think "math is hard" - they have never actually seen any (or if they did, it was well disguised).

See GoldenRuleOfAlgebra

Ed's example is easy because it deals with an easy linear equation. A quadratic equation can also be solved fairly easily, though it can require a new concept, complex numbers, when the discriminant is negative. With a cubic equation, things change. Finding a way to solve it seems to require some luck, rather than just rational thought.

The existence of a solution doesn't rule out luck. Mathematicians searched for a solution formula for the general cubic equation for a long time before some had success. The successes seemed to be achieved by having the good luck to stumble across the form of the solution. Once the form had effectively been stumbled upon or guessed, completing the details became quite easy. Note that I did not specify the solution had to have a simple algebraic form. Some cubic equations can be neatly solved using trigonometric functions. In general, such solutions are reals that can't be expressed using radicals without introducing complex numbers. The cube root of a complex number isn't quite as "elementary" as the cube root of a real.
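To make the trigonometric remark concrete, here is a sketch (in Python, purely illustrative) of the classical cosine method for a depressed cubic t^3 + p*t + q = 0 in the three-real-root case, exactly the case where radicals alone would force a detour through complex numbers:

```python
import math

def depressed_cubic_roots(p, q):
    """Real roots of t^3 + p*t + q = 0 in the three-real-root case
    (discriminant 4p^3 + 27q^2 < 0), via the trigonometric method."""
    assert 4 * p**3 + 27 * q**2 < 0, "expects three distinct real roots"
    m = 2 * math.sqrt(-p / 3)
    theta = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p)) / 3
    return [m * math.cos(theta - 2 * math.pi * k / 3) for k in range(3)]

# t^3 - 7t + 6 = 0 factors as (t - 1)(t - 2)(t + 3); roots 1, 2, -3
roots = depressed_cubic_roots(-7, 6)
```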

I'm not sure what the ending comment in your first paragraph refers to. Can you give an example of such a "tool at a higher level than radicals"? Or are trigonometric functions already such a needed tool when it comes to finding the cube root of an arbitrary complex number?

The ancient problem was to find methods for solving polynomials symbolically/in closed form (not numerically) using traditional geometry (straight edge and compass) and equivalent tools in algebra (variables, arithmetic operations, and radicals/roots) and nothing more. In the 19th century it was finally proven that doing so is impossible in the general case for polynomials of degree 5 and higher.

Arbitrary polynomials can in fact be solved in closed form, in some sense, if those traditional restrictions are lifted, and one is allowed to use absolutely any mathematical apparatus one desires. The most general closed form solutions are hairy and aren't usually particularly useful, though, so that's pretty much a mere curiosity.

Numerical solutions are easier in one sense, but in general, numeric algorithms for solving polynomials suffer from quite serious problems with precision and convergence. Dealing with such things in the general case is a deep speciality and essentially extremely hard, a fact that is not widely appreciated by those who have neither studied numerical analysis nor been bitten by non-convergent or incorrect numeric results.
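A toy illustration of the convergence problem, assuming nothing beyond textbook Newton's method: on f(x) = x^3 - 2x + 2, the iteration started at x = 0 cycles between 0 and 1 forever and never approaches the real root near -1.77:

```python
def newton_step(x):
    """One Newton's-method step for f(x) = x^3 - 2x + 2."""
    f = x**3 - 2*x + 2
    fprime = 3*x**2 - 2
    return x - f / fprime

x = 0.0
trajectory = [x]
for _ in range(6):
    x = newton_step(x)
    trajectory.append(x)

# The iterates never converge; they oscillate forever.
print(trajectory)  # -> [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```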

We thus have the situation that, absolutely no matter how you approach it, the overall problem of dealing with general polynomials is inherently hard, despite the fact that certain special cases become easy with increased understanding.

Simple math is easy once one understands, but that's a special case.


Math is hard only if you make it hard. Algebra, for example, is just a set of rules for people who don't understand what they're doing. Do you mean arithmetic? Algebra is a beautiful area of mathematics, and bears no resemblance to your statement.

I see that I need to plant my tongue even more firmly in my cheek. Remember, this is a quote from BARBI! -- TomStambaugh


Yes it was tongue in cheek, but the great lesson I have learnt in the last 10 years or so about maths is that it is mind-bogglingly easy once you understand it, and the hardest part about understanding it is coming to grips with how mind-numbingly simple what is being said is.

But YouCantLearnSomethingUntilYouAlreadyAlmostKnowIt, which, in this case, implies that you must have acquired at least some routine with the 'something' in question before the AHA can occur.

It is, however, a touch like that last sentence: a great big windbag of ideas all mashed together. Each is simple. Together they are complex, but only by the sheer weight of numbers of trivial ideas stacked up.

Example. I once read a lot of maths that said something simple like

    :    If I have stuff.
    :    If Stuff never spontaneously vanishes.
    :    If I only ever add more stuff.
    :    and there is no such thing as negative Stuff

: I never wind up with less Stuff.
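That "Stuff" argument is just the monotonicity of a running total under non-negative additions; a sketch, with illustrative names:

```python
def running_totals(additions):
    """Cumulative amounts of Stuff, given only non-negative additions."""
    total, history = 0, [0]
    for amount in additions:
        assert amount >= 0, "there is no such thing as negative Stuff"
        total += amount
        history.append(total)
    return history

history = running_totals([3, 0, 5, 2])
# Each total is at least the one before it:
# we never wind up with less Stuff.
assert all(a <= b for a, b in zip(history, history[1:]))
```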

The complications typically arise from the accumulation of a large number of almost trite statements. See group theory, error-correcting codes, cosets, AbelianGroups, and the like for real examples of heavy, powerful mathematics built from trivial pieces.

Then again, everything you quote above is elementary, certainly not what I would call 'heavy'. We certainly aim for simplicity wherever possible, but this is not the same as triviality.


Perhaps SoakTime applies...


Personally, I find that math is relatively easy as long as it stays close to concepts I can visualize, but can get very hard otherwise. All the above stays easy, and even large pieces of Galois theory or complex analysis stay easy. The most general abstract rings, though, are hard to work with, and so is anything involving the axiom of choice. -- JoshuaGrosse

One might replace "math" by "programming" in much of this page. Curiously, the ProgrammingIsHard page was last edited 4.3 years ago (November 3, 1997).


Can we talk about serious math, please? Define, please.


Serious math? Well, there's a certain amount of talking past each other here. Some of us have backgrounds that let us nod sagely to "Anything involving the axiom of choice tends to be confusing" and some of us have backgrounds that encourage us to confuse math with arithmetic. (Out there in the BigRoom, there's enormous unevenness in the level of formal technical education of software people, in the priority that different formal education programs place on advanced math, and in the ability of different educators to get the idea across.) In case there's anyone reading this who wants a peek into serious math from the weak-background side, I nominate two books. GoedelEscherBach is a beautiful, relatively gentle and indirect introduction, and it won the Pulitzer Prize, so it's easy to find. Computability and Unsolvability (ISBN 0486614719) is beautiful in its own more brutally direct way; it's cheap and available (republished by Dover), and its appendices give a complete statement of a subtle proof of a difficult problem (nominated by Hilbert as particularly important, and finally solved fifty years later) expressed in terms of basic algebra.


I'm lost. Would you place calculus in serious math here or not? No. Analysis, yes. Does that help? No, how about topology?

Where is this going? How can replying "yes" or "no" help? In what sense are you "lost"?

I'm lost since I feel I've not covered enough areas of mathematics (or non-mathematics as it appears) to imagine what the differentiation between maths and other non-math figure manipulation is. So you said calculus isn't... and probably most simpler things either. I'm trying to find an area of study from each category that I'm familiar with. Thanks for your efforts.

Study of what? Category of what? Certainly, topology is serious enough for some people to find it hard to maintain concentration if they try to read through an entire topology textbook at one go... or even several chapters.

What kind of analysis are we talking about here as being 'serious' mathematics? Or better yet: what kind and where can we see some examples so we've a better idea of what's 'serious' or not?

The 'analysis' referred to was the theory that underlies calculus.


Math is easier if you use a computer to help (i.e., running MathematicaLanguage, Maple, Derive, or similar). To see more detailed steps (axioms, proofs), PrologLanguage can offer insight. You might not be able to take a laptop or HandHeld with you into tests, but if you work through many problems using these tools, you get a feel for the higher-level operations of the problem, and can then go back over them completely by hand - even the multiplication and long division, without so much as a calculator (the opposite extreme). -- HapAngery


Are the following "real math" ...

  1. Prove that for any function f from the reals to the reals such that (a) f is continuous, (b) f(0) < 0 and (c) f(1) > 0, there exists x such that 0 < x < 1 and f(x)=0

  2. Prove that for any finite partially ordered set (P,R) that is not totally ordered, there exist two elements a and b of P such that in a randomly chosen linear extension L of R, the probability that (a,b) is in L lies between 1/3 and 2/3 (inclusive).

    • A partially ordered set (P,R) is a set P and a subset R of PxP such that
      1. (a,a) is never in R
      2. if (a,b) is in R and (b,c) is in R then (a,c) is in R

    • A linear extension of R is a subset L of PxP such that
      1. R is a subset of L
      2. If a is not equal to b, then either (a,b) is in L or (b,a) is in L

  3. Prove that for any subgroup S of a group G, the size of S divides the size of G

  4. Prove that the completion of the rationals (using Cauchy sequences) is the same as the set obtained by Dedekind cuts of the rationals.
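Problem 1 (the intermediate value theorem) has a computational shadow worth seeing: bisection narrows in on such an x for a concrete continuous f. This is a numerical illustration, not a proof; the particular f below is a hypothetical example chosen to satisfy conditions (a)-(c):

```python
def bisect_root(f, lo=0.0, hi=1.0, tol=1e-9):
    """Given continuous f with f(lo) < 0 < f(hi),
    find x in (lo, hi) with f(x) approximately 0."""
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid   # root lies in the right half
        else:
            hi = mid   # root lies in the left half
    return (lo + hi) / 2

# f(x) = x^3 + x - 1: f(0) = -1 < 0 and f(1) = 1 > 0
x = bisect_root(lambda t: t**3 + t - 1)
```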

Yes it is mathematics. Now convert to SymbolicLogic and provide the proofs using SecondOrderLogic?

Why, what will that accomplish? My point was to provide examples for people of "real math" rather than arithmetic or equation manipulation. Calculus is advanced equation manipulation, whereas math, broadly speaking, is proving things.

If you cast your examples as pure logic and showed the results the same way, it would be evident that proofs are also just advanced equation manipulation, but with "equations" of the form a -> b, a <-> b, or a |- b instead of a = b, which is what you are alluding to. All mathematics can be reduced to sets and logic. Proofs are usually written in a combination of natural language and symbols, making them seem mysterious, but each step in a proof is just an application of a previous axiom or theorem, which can be represented with "forall", "exists", "^", "v", "~", "->", etc. Less algorithmic than multiplication or differentiation, but essentially pattern matching nonetheless.

So, are you claiming that the recent proof of FermatsLastTheorem is just equation manipulation? Hmm - interesting viewpoint.

In fact it is. Not easy manipulation (I agree that MathIsHard), but yes, substitution of one equation for another - there is no mystery. It is a problem of navigating a (large) search space, much like finding the next best move in chess. Theorems from one branch used to prove theorems in another may seem unrelated, which makes the task difficult. In the case of FLT, it follows directly from the TaniyamaShimuraWeilConjecture; Andrew Wiles knew that if he could prove that conjecture he would have FLT, and used Group Theory to do so. He spent many years on it, locked himself away in his study, carried a notebook around to do sub-proofs, and even then errors were discovered after his initial announcement, and he had to go back and rework it for several more months. But in the end, no matter how complex the proof, it consists of discrete steps logically derived from expressions in the problem "environment" (that is what a proof is).

Again - try to convert your problems above and their proofs to logic; at least for these smaller problems it would be clear. For FLT the proof is many pages long, but one could also convert every step to SymbolicLogic and show that each step follows by substitution from the others. In formal logic you are supposed to annotate every single line with a "justification" giving the line numbers of the derivations used and the rule of substitution, theorem, or assumption applied. Example: "1,2 &I" means derive the current line by combining lines 1 and 2 using &-introduction. In mathematics this is often relaxed - a proof will say things like "it follows from eq. 1 and 2 that..." and is not as explicit or structured - but it is always possible to make it more rigorous. In the end, deriving proofs is no more esoteric than doing arithmetic or calculus. It is a similar process (ProductionRules?), just more detailed. There is a whole branch of mathematics dedicated to the study of proofs themselves, called ProofTheory.

As a pure math PhD (GraphTheory and combinatorics) who has done courses in ProofTheory and foundations of mathematics, you seem to me either to be missing the point or being deliberately unhelpful. I honestly can't tell which. No one ever does difficult and complex math in the way you suggest, and talking about it that way in this context seems to me to be highly misleading. Thinking about it as pure equation manipulation seems most unhelpful. Do you believe you are helping others to understand "real math" by discussing it in those terms?

You are, of course, absolutely right that all of mathematics can be rewritten in symbolic logic, and all proofs are pure manipulations of equations, but that doesn't help people understand the way it's actually done, or what it is that mathematicians do. It's not just "really hard sums", and describing it as such seems to me dishonest in this context.

I suspect that we're probably mostly in agreement over all the details. Seeing the detail, however, is often a barrier to gaining understanding. I believe that an understanding of why a chessboard with opposite corners removed cannot be tiled exactly with dominoes is not gained by reading a proof couched in symbolic logic and using second order predicate calculus.
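For what it's worth, the counting fact at the heart of that domino argument can be checked mechanically; the insight (each domino covers one square of each color) still has to come from a person. A sketch:

```python
# Chessboard with opposite corners removed: which squares remain?
squares = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}

# Standard coloring: (r + c) even is one color, odd is the other.
dark = sum(1 for r, c in squares if (r + c) % 2 == 0)
light = len(squares) - dark

# Every domino covers one dark and one light square, so a perfect
# tiling would need dark == light. It doesn't hold: 30 vs 32.
print(dark, light)  # -> 30 32
assert dark != light
```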

As someone who has written ArtificialIntelligence and NaturalDeduction? systems, I am of the opinion that using computers as an aid to understanding mathematics is extremely helpful. There is pretty much no other way to encode axioms and theorems to produce proofs for a computer unless you approach it using logic (whether you implement it in Scheme, Erlang, Prolog, Haskell, or whatever). By watching the computer do in a few seconds or minutes what takes a person hours, you not only reduce the perception of difficulty but also gain insight into the underlying processes. Mathematicians are increasingly using computer-assisted proofs in actual work. Example: http://www.risc.uni-linz.ac.at/research/theorem/

Your initial statements made it seem as if proofs are somehow magical in relation to other mathematical processes; I am simply pointing out that they are not. It was personally liberating when I learned that there were such unifying principles to the diverse branches of mathematics, and I am just sharing that. Even to understand calculus and arithmetic properly you need proofs {forall a, x, n in R, d(a*x^n)/dx = n*a*x^(n-1) is a theorem that has a proof, as you know}.

Suggesting that proofs are very different (and therefore more inaccessible) without offering a methodology for dealing with them makes math seem even harder and potentially creates MentalHandcuffs. You seem to be approaching it as a pure mathematician; I am approaching it as a practising developer who believes hard tasks can be made easier by machines, which is relevant for this page. Maths is just another domain. The same could be done for someone wanting to understand chemical equations (ChemistryIsHard?) or the GeneticCode. Once encoded as logic, you can always translate proofs to natural language; it is much harder to go the other way (language is ambiguous).

You are entitled to disagree, but why not state what process you would consider helpful for people wanting to understand proofs, instead of saying "don't just encode as logic"? Even if you say to "just think" about it or use metaphor, your internal MentalModels are using representations pretty close to logic whether you are aware of it or not (SemanticNets have been shown equivalent). Why not externalise the process? Take us step by step, in your own words, through how you would lead someone to understand the chess tiling problem above, if not by second-order encoding.


I think math is fairly easy if it is explained to you in a way that you can relate to. The approach that "connects" with one person may fail for another. For example, gambling is for many an excellent way to teach the fundamentals of probability theory, because one can live the emotion and have direct experience of it. Probability is no longer an "abstract" concept if you have lived it. Sports stats is another way.
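In that spirit, a dice simulation lets someone watch a probability like P(two dice sum to 7) = 1/6 emerge from experience rather than from a formula. A rough sketch:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

trials = 100_000
sevens = sum(1 for _ in range(trials)
             if random.randint(1, 6) + random.randint(1, 6) == 7)
estimate = sevens / trials

# The exact answer is 1/6 ~ 0.1667; with this many rolls
# the empirical frequency lands close to it.
print(estimate)
```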


How math is taught makes it hard. In my first calculus class, the first thing the "teacher" put on the board was an integral sign. Huh? I never got it. Later I found books that described why and how calculus was created, and it made sense. In discrete math I didn't understand a lot of the material because it was all abstract. If schools actually cared about teaching math, they could do a lot better.

I've had similar experiences. When I learn math as history it makes more sense to me. Math is a human invention, like technology or literature. Each part of it was invented by some human for some reason. But I also think everything should be taught as history. No matter how universal a subject seems, someone in the past invented or discovered it.

Agreed. Although I would like to see things taught as discovery, not as history.

History is the record of discovery (and invention).

To clarify, I would rather have students recapitulate the discovery process physically rather than just read about it.

What do you mean by "recapitulate"? If you mean trace the paths already taken, then we agree. If you mean make the discoveries anew then you're talking about thousands of years worth of effort by the greatest minds humanity has produced. I don't think it's reasonable to expect that from 16 years of education.

It would be interesting to consider how education might differ in much longer lived species. Maybe you could have a more faithful trace. As for our pitifully short lives a large degree of compression is necessary.

We are at the upper end of the animal longevity spectrum.

On earth. In this eon.


Maths illiterate

Maybe maths is hard, maybe it isn't. But it's like anything else: if you don't take the time to learn it and expect to just know it automatically, you'll find it tricky, impossible even. Obvious, really, but I think it's worth saying. Maths is another language, and not your first, natural language. If you've never taken the time to learn this language and it's unfamiliar to you, then of course you're not going to be able to understand things written in it. Not having become at least partially fluent in the language of maths isn't a sign of a lack of intelligence; it's just a sign of not having taken the time to learn the damn language yet. Of course there's more, probably much more, to it than just knowing the language, but that's the place to start, I reckon.

Even if you do realize the above, and set out to learn maths, there seems to me to be a lack of learning material (books, web-sites etc.) that aims to teach maths to people who are at the moment, maths illiterate.

As a complete maths beginner, I'd like to have something that helps me translate things written in maths into English. Even if that isn't, in the long run once somewhat proficient, a very likely or good way of doing things. To start with that's what I feel I need to do. I want a kind of Maths to English Dictionary and Phrase Book. An aid to translating maths to English. A basic usage guide to the language of size, order and shape. A reference/guidance book for people who are not familiar with the maths language.

That type of material is sorely missing at the moment IMO; If there are any mathematicians out there who'd be interested in doing something like the above, paper or web-based *please* do - let me know if you're interested as I'd very much like to work on such a thing myself, but someone who is proficient in maths is obviously necessary. Someone who doesn't know maths would also be very useful though for comments/testing. I studied design and know html so could do and would be very happy to do the production side of things. Let me know if you're interested: b-e-n-d-at-f-r-e-e-n-e-t.c-o.u-k (remove all hyphens and replace at with @. My name's Ben.)

Anyway, here's a small number of things that I have found useful for learning maths:

Via the web page "Math for the laymen" (http://www.cs.trinity.edu/About/The_Courses/cs301/math-for-the-layman/index.html), I found out about the book "Mathematics for the Million" by Lancelot Hogben (ISBN 039331071X). It's pretty old: originally written in 1936; I'm currently reading a 1967 edition. There is a newer version, but I'm not sure how much it differs - I've compared the few pages Amazon lets you see of the 1999 version, and they're exactly the same as the 1967 edition. It's too verbose IMO, and too heavy on the history for me; nonetheless I'm finding it worthwhile, helpful, and understandable. A modern, more concise book with more straight explanations, but in a similar line (aimed at non-mathematicians), would be great, but as far as I know it doesn't exist.

I have found "The Prentice Hall Encyclopedia of Mathematics" (ISBN 0136960138) to be quite good for random browsing (great bathroom book!), looking up things you're a little fuzzy on, and easily digested explanations. It also has a pretty good bibliography for each section if you become interested in a deeper discussion. It's also quite nice in that even 'simple' math concepts are explained in such a way as to give further insight; that may be a property of my not paying enough attention back in high school, rather than something intrinsic to the book.

In my struggle to try and learn maths (I was trying to get my head round a book on fractals, but I didn't know the language at all, so it was most frustrating), I was recommended to read a book on analysis. I started reading one but was unable to understand it past page 5 :( But from that advice I found a free book called "Basic Concepts of Mathematics" by Elias Zakon http://www.trillia.com/zakon1.html (the intro recommends starting at chapter 2). It is well written: clean and clearly described, not too hard to understand, and it doesn't rely on too much previous knowledge. There's another free book on analysis by the same author that follows on from the Basic Concepts book.


Some suggestions if you find MathIsHard:


When I started learning maths, I came across a barrier. I started very late (I was 20ish), so I didn't have a lot of confidence in my natural abilities. I started with first year university texts. They were teaching various subjects, but all had one message in common: "DON'T USE YOUR INTUITION. LOOK AT THE PRIME NUMBER THEOREM. LOOK AT UNCOUNTABLE SETS. LOOK AT RUSSELL'S PARADOX. YOUR INTUITION IS WRONG." So I promptly turned off my intuition, especially my geometric intuition. I struggled with calculus for months, and whenever my brain started to visualize a function I would say "stop it", and return to my rigorous axiomatic proofs. I never really got the hang of calculus, or trigonometry. But I loved set theory. Then I read VisualComplexAnalysis, which said "understand things with your geometrical intuition, and only then go back and learn the proofs". I think this is the right advice for every area of maths. The geometrical interpretation should always be taught first, and should be taught thoroughly so that it comes to mind easily. Yes it's important to be rigorous, but it's more important to be able to think about the subject at all, and insisting on rigour too early prevents this. -- RobbieCarlton


Relevant pages mentioned above: ProgrammingIsHard, GoldenRuleOfAlgebra, YouCantLearnSomethingUntilYouAlreadyAlmostKnowIt, MentalHandcuffs

See also TheoremProving, MathematicaLanguage, MathWiki.


CategoryMath


EditText of this page (last edited January 3, 2011) or FindPage with title or text search