Same goes for problems in data structures and algorithms (read: hackerrank / leetcode problems)<p>Some people will recognize patterns pretty fast, while others must solve literally hundreds of different problems, before becoming comfortable with the concepts.<p>People always seem amazed and baffled that some candidates can practically walk into white-board interviews unprepared, other than what they learned / did in their DS&A classes in college, and nail the interviews, while others have to basically prep 6-12 months before passing the same interview.
Everything is pattern matching (or memorization). You can use this approach to half-automate the solution to a known existing class of problems, but how do you come up with anything new? How did Paul Cohen come up with the forcing technique? Who figured out probabilistic proofs as a possible vector of attack?<p>"Both these properties, predictability and stability, are special to integrable systems... Since classical mechanics has dealt exclusively with integrable systems for so many years, we have been left with wrong ideas about causality. The mathematical truth, coming from non-integrable systems, is that everything is the cause of everything else: to predict what will happen tomorrow, we must take into account everything that is happening today.<p>Except in very special cases, there is no clear-cut "causality chain," relating successive events, where each one is the (only) cause of the next in line. Integrable systems are such special cases, and they have led to a view of the world as a juxtaposition of causal chains, running parallel to each other with little or no interference."<p>- Ivar Ekeland
> What’s important to recognize is that these same attributes apply across all levels of math.<p>Functional/Relational programming models are just a trivial layer on top of math. Everything is pattern recognition at the end of the day.<p>Domain modeling is the logical extension of building standardized "patterns" that can be leveraged for rapidly building & replicating similar ideas.<p>Using good modeling techniques is the most important thing for managing complex systems. If you aren't sure, you can always start modeling at 6th normal form, then walk it back to 3NF as the various pieces start to make sense together. If you have your domain in 6NF and are using purely functional/relational programming, there are mountains of mathematical guarantees you can make about the correctness of your software. For instance, 6NF gets rid of null. It forces you to deal with the notion of optional facts using 0..1-1 relations and applicable query constraints.
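To make the "6NF gets rid of null" point concrete, here is a minimal sketch (plain Python standing in for relations, with made-up employee data) of decomposing an optional fact out of a nullable column and into its own 0..1-1 relation:

```python
# 3NF-ish: one relation, optional fact as a nullable column.
employees_3nf = [
    {"id": 1, "name": "Ada", "termination_date": None},
    {"id": 2, "name": "Grace", "termination_date": "1986-08-14"},
]

# 6NF-style decomposition: the optional fact lives in its own relation,
# keyed by employee id, in a 0..1-to-1 relationship with the key.
# Absence of a row *is* the "no such fact" case -- no null anywhere.
employees = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace"},
]
terminations = {2: "1986-08-14"}

def has_termination(emp_id):
    # Queries must explicitly handle the absent case up front,
    # instead of letting a null leak through joins.
    return emp_id in terminations
```

The design point is that the schema itself, not a runtime null check, encodes which facts are optional; a query that forgets to handle absence simply fails to type-check against the decomposed relations.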
I like the teaching idea, but I feel like there is a step missing in here:<p>> When my students encounter a math problem they can’t answer, I have them put it in the error log with an explanation of how they did and how they knew how to do it.<p>If they can't answer it, where does the "how they knew how to do it" come from? Their teacher/tutor?
A lot of math is taught in a sloppy way, which thwarts the pattern recognition progress in brains trying to learn it.<p>Programming is significantly easier than math (for something equivalently complex) because of things like syntax checking and compiler/interpreter errors. This speeds up the pattern recognition process in the human brain.<p>People who are identified as being skilled at math or programming at a relatively early age are usually those who understood it in spite of the teacher/curriculum, so the ability comes as a surprise.<p>But many such people do not go on to distinguish themselves in either field in any way. There are always things that come easily to one person vs another, but in math and programming, the early birds are typically the only ones whose interest in the subject isn't destroyed by the teaching methods (because the learning happened in spite of them).
Mathematics is the study of patterns, any kind of pattern, in anything. "Difficulty" of maths problems is a kind of measure of how well you know the patterns involved (which is related to how good our notation, terminology, and visualisations for them are). That means research developing brand new maths or applying it to new problems is often difficult, because no one knows the patterns yet, or has good ways of describing them.
I guess it's kind of obvious now that practice always helps, but for a while, mostly during high school, I used to think that being good at math because you've seen hundreds of similar problems before was kind of "cheating" and you weren't really that smart. Instead, you were smart if you managed to do really well on a test/competition without doing tons of practice questions.<p>Consequently during math classes I used to sit at the back of the class and play Counter-Strike all day on my laptop. Nobody seemed to care since I'd ace all the tests and still compete for my school in math competitions and stuff. However I completely wrecked my math education, and come university (I skipped the last year of high school for uni, there's a standard program for it in my country) I had completely forgotten how to prepare for a math exam and was systematically left further behind with every year Lol.<p>Looking back I still kind of regret my perspective on doing practice problems. In hindsight it was kind of stupid, but it was mostly because I thought it was kind of lame that I sometimes did well because I practiced more than other people, whereas some other students seemed to do pretty well without (seemingly) having practiced at all. On the plus side I do feel I learn things a lot faster than most people and am pretty decent at a wider variety of things
Don't most experts in most domains work the same way - recognising patterns they've seen before?<p>That's why an expert can charge so much for 1 hour of time. It is more valuable than days or weeks or months of a non-expert's time who doesn't have the library and can't recognise the pattern.
> When my students encounter a math problem they can’t answer, I have them put it in the error log with an explanation of how they did and <i>how they knew how to do it</i>.<p>I'm gonna assume a step where they learned how to do it?<p>TFA's method is for incremental discovery expertise. Feynman talks about an inverse, where he maintained a list of interesting problems, and when he learnt a new technique, tried it on each one.<p>But Feynman's actual breakthroughs came from playfully looking at phenomena.<p>I think the incremental skills are basics like reading, writing and arithmetic - it's harder to really get to grips with something you've noticed without them.<p>I mean, Einstein famously didn't have adequate math for special relativity and sought help. He was however the one to <i>notice something</i>.<p>A library of techniques is a poor substitute for actual thought.
It is more than just having the toolbox. You need to know how to use the tools and how to combine them. Anything harder than the basics is going to require a lot of outside-the-box thinking and epiphanies rather than just pattern recognition
Very subtle and hard most of the time
A math professor of mine said that the first step to solve a math problem was to know the answer.<p>I remember during my first year buying a book called something like 1000 limit problems. I "just" did about 300. It was definitely pattern matching and nothing Mathematica wouldn't do better than me.
In discrete mathematics (combinatorics) there are definitely some tools and techniques, but seemingly every problem is unique. I'm not quite sure that pattern matching is very useful in this subfield of mathematics?
Literally all math is about adjoints, norms and fixed points<p><a href="https://github.com/adamnemecek/adjoint/" rel="nofollow">https://github.com/adamnemecek/adjoint/</a>
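As a toy illustration of the fixed-point part of that claim (not code from the linked repo, just a standard Banach-style iteration), here is how "iterate until nothing changes" finds the fixed point of cos, the Dottie number:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x -> f(x) until successive values stop moving.

    Converges when f is a contraction near the fixed point
    (Banach fixed-point theorem); raises otherwise.
    """
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("iteration did not converge")

# The Dottie number: the unique real solution of cos(x) = x.
d = fixed_point(math.cos, 1.0)
```

The same iterate-to-stability shape shows up all over math and CS: Newton's method, value iteration, dataflow analysis in compilers, and so on.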
Getting into this sort of pattern matching, I think, requires a certain optimism that things are "nice and symmetrical" after enough analysis. Of course, that can get us into trouble with e.g. "supersymmetry in physics", but usually I think that optimism is a feature, not a bug, and necessary in any case.<p>I think instilling this optimism in students --- following their curiosity won't lead deeper into a bottomless pit; if something doesn't make sense it might not be them but a lack of information, etc. --- is the essential hard part, and requires undoing a lot of alienation people experience.<p>Conversely, I think messing around with black boxes like machine learning we don't understand is giving in to the alienation. (Studying it to understand it rather than to do things is fine.) I worry more use of machine-learning-like things will be another nail in liberalism's coffin as we do the equivalent of regressing back to alchemy from chemistry.<p>Now, looking for patterns is what machine learning does, but while Rorschach-test-style groping in the dark might be the basal "reptilian" instinct that led to more high-level theory-based pattern recognition, they should not be conflated.