When we talk about 'learning mathematics', it's important to recognize a huge difference between learning how to use mathematical discoveries/inventions and learning how to develop mathematics from scratch, via the whole theorem -> proof -> new theorem route that defines 'pure mathematics'.<p>Most people are simply not going to get a lot out of the latter and will indeed be turned off by it, much to the disappointment of professional mathematicians (i.e. most college professors in maths). I'd guess > 95% of people taking higher maths courses are never going to develop new proofs - but they will use what they've learned in other areas, such as physics, biostatistics, finance, etc. Essentially we just take it on faith that the mathematicians got their proofs right, and we gratefully use the fruits of their labor. (They're all quite mad, those mathematicians, if you ask me.)<p>Now, when you first learn how to apply maths to things like physical problems, this is where the cartoons - the 'simple approximations neglecting complex factors' - become really important to learning. You don't want to include friction when first examining falling weights, springs, and pendulums from a physical viewpoint, for example. Later on, when you get that job with SpaceX, understanding friction in depth will be critically important, but if you don't start with the simple cartoon approximations, it'll be way too much to comprehend.<p>However, this probably wouldn't work for the real mathematicians. They've got their axioms, from the axioms they develop proofs, and from those proofs they develop more proofs - there's no approximation or simplification involved in any of that, is there?
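To make the 'cartoon first' point concrete, here's a toy sketch of my own (the comment doesn't include any code, and all the names and parameter values here are mine): the frictionless small-angle pendulum has a clean closed-form solution, while the 'real' version with friction has to be ground out numerically - and you can see the cartoon is a good starting approximation before friction enters the picture.

```python
import math

def simulate_pendulum(theta0, length=1.0, g=9.81, damping=0.0,
                      dt=0.001, t_end=5.0):
    """Numerically integrate theta'' = -(g/L) sin(theta) - damping * theta'
    with semi-implicit Euler. damping=0.0 recovers the frictionless case."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega += (-(g / length) * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
    return theta

def small_angle_theta(theta0, t, length=1.0, g=9.81):
    """The 'cartoon': small angles, no friction, released from rest ->
    theta(t) = theta0 * cos(sqrt(g/L) * t)."""
    return theta0 * math.cos(math.sqrt(g / length) * t)
```

For a small release angle (say 0.1 rad) with no damping, the numeric result stays close to the cartoon formula; turn damping on and the amplitude visibly decays away from it - which is exactly the extra complexity you postpone until you need it.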