TLDR: The author independently re-discovered what you may know as Old Code Syndrome.<p>I think that's because mathematical papers place too much value on terseness and abstraction over exposition and intuition.<p>This guy's basically in the position of a fairly new developer who's just been asked to do a non-trivial update of his own code for the first time. All those clever one-liners he put into his code made him feel smart and got the job done at the time. But he's now beginning to realize that if he keeps doing that, he's going to be cursed by his future self when he pulls up the code a few months later (never mind five years!) and has zero memory of how it actually works.<p>I'm not intending to disparage the author; I've been there, and if you've been a software developer for a while you've likely been there too.<p>Any decent programmer with enough experience will tell you the fix is to add comments (more expository text than "it is obvious that..." or "the reader will quickly see..."), add unit tests (concrete examples of abstract concepts), and give variables and procedures descriptive names (the Wave Decomposition Lemma instead of Lemma 4.16), etc.
My response is that for every 100 of these types of papers, one may prove to be pivotal or inspirational to something truly groundbreaking and functionally useful. For this reason, I am all for 100 different people spending their time doing things like this, because eventually one of them will make an impact worth more than the combined efforts of all 100.<p>It's just a different kind of "brick in the wall" - only the occasional diamond in the rough turns out to be hugely important for something else in the future.
This does not surprise me in the least.<p>Math was always extremely easy for me growing up. Up through my first differential equations class I found almost everything trivial to learn (the one exception is that I always found proving things difficult).<p>I made the mistake of minoring in math, and that slowly killed my enjoyment of it. Once I got to differential geometry and advanced matrix theory it all became too abstract, and I just wanted to get away from it.<p>For several years after college I would routinely pull out my advanced calculus text and do problems "for fun". After a while I stopped doing that. Within a few years of no longer being exposed to math, I found it all incredibly foreign and challenging, to the point where I would say I have a bit of an aversion/phobia to it.<p>I'm trying to reverse that now by tackling a topic I'm interested in but have previously avoided due to its math-heavy nature - type theory.<p>Hopefully I can find the joy in math again through this.<p>I think my point is that you can lose competence in math very, very quickly through lack of constant exposure.<p>The same is probably true of programming, but I hope never to end up in that position.
An interesting read. But I think the author should have explicitly written out the point he is really making: you can't be too careful about making your writing clear, even to yourself. I recall reading (I'd link to the book if I could remember which one it was) that mathematicians who occasionally write expository articles on mathematics for the general public are often told by their professional colleagues, fellow research mathematicians, "Hey, I really liked your article [name of popular article] and I got a lot out of reading it." The book claimed that if mathematicians made a conscious effort to write understandably for members of the general public, their mathematics research would have more influence on other research mathematicians. That sounds like an important experiment for an early-career mathematician to try.<p>More generally, in the excellent book <i>The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century</i>,[1] author and researcher Steven Pinker makes the point that the hardest thing for any writer to do is to avoid the "curse of knowledge": assuming that readers know what you know as they read your writing. It's HARD to write about something you know well without skipping lots of steps in reasoning and details of the topic that are unknown to most of your readers. This is one of the best reasons for any writer to submit manuscripts to an editor (or a set of friends, as Paul Graham does) before publishing.<p>And, yes, if you think what I wrote above is unclear, as I fear it is, please let me know what's confusing about it. I'd be glad to hear your suggestions for making my main point clearer. I'm trying to say that anyone who writes anything has to put extra effort into making his point clear.<p>[1] <a href="http://www.amazon.com/gp/product/B00INIYG74/" rel="nofollow">http://www.amazon.com/gp/product/B00INIYG74/</a>
I've been working with Haskell* for a couple of years, and I quite often work with code that I don't fully understand. I'll come across a terse bit of code, then carefully take it apart to see what it does (by pulling bits and pieces out and giving them names instead of passing them around point-free, and also adding type annotations). Once I see the whole picture, I make my own change and then <i>carefully re-assemble the original terse bit of code</i>. One could ask: wasn't the verbose version better? I'm going to lean toward no. If I left this bit verbose, and other bits verbose, it would be hard to see the whole picture.<p>I think doing maths would be better if it were done interactively with software. If equations were code, you could blow one up to examine the fine details and then shrink it back to a terse form, while the software keeps track of the transformations to make sure what you write is equivalent. Maybe it's time to add a laptop to that paper pad?<p>* not arguing anything language-specific here, except that Haskell makes use of a variety of notations that make code shorter and more like maths. More so than most languages.
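For concreteness, here's a minimal Haskell sketch of that take-apart-and-reassemble process (the function and every name in it are invented for illustration, not taken from the parent comment):

```haskell
-- The terse, point-free original: total word count across a list of documents.
totalWords :: [String] -> Int
totalWords = sum . map (length . words)

-- The same pipeline "blown up" with names and type annotations while
-- studying it; once understood, it can be collapsed back to the one-liner.
totalWords' :: [String] -> Int
totalWords' docs = sum wordCounts
  where
    wordCounts :: [Int]
    wordCounts = map countWords docs

    countWords :: String -> Int
    countWords doc = length (words doc)
```

Both versions compute the same thing; the second exists only as scaffolding for reading, which is exactly why collapsing back to the first keeps the big picture visible.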
All of these arguments are arguments for replacing the mathematics curriculum with video gaming. Games require generalized problem solving (arguably more generalized than math, and arguably more transferable to other domains). Games build character: grit and tenacity, cautious optimism, etc blah blah etc. And games are fun (for many more people than find math fun).<p>Guess math teachers should start learning to play League of Legends and Pokemon.<p>Alternatively, I guess we need better reasons than those to teach a subject.
Math seems to have a very <i>ephemeral</i> lifetime in the brain. I skipped a year of college once, and when I returned I realized I had to basically abandon <i>any</i> major with a math requirement, because I had seemingly forgotten <i>everything</i>.<p>I'm currently struggling with an online Machine Learning class (the Coursera one... at the tender age of 43), and I can only take it (so far, at least... just failed my first quiz... fortunately I can review and re-take) because I was rather <i>obsessed</i> with matrices, oh, about 28 years ago. "You mean I can rotate things in an n-dimensional space without using trig?"
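As a reminder of what that matrix trick looks like (my reconstruction, not necessarily the construction the parent had in mind): a rotation in n dimensions is just multiplication by an orthogonal matrix, so no trig functions are needed once you have the entries.

```latex
% Rotation as multiplication by an orthogonal matrix R with R^T R = I and
% det R = 1. A 2-D example built from the 3-4-5 triangle, with no trig:
\[
R = \begin{pmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{pmatrix},
\qquad
R^{\mathsf{T}} R = I, \qquad \det R = 1,
\qquad
R \begin{pmatrix} 5 \\ 0 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.
\]
```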
I'm truly shocked by the multiple people in the thread who claim that Math knowledge can be completely erased through as little as a year of non-practice.<p>For me, Math has always resembled riding a bike more than anything else. Sure, the first few moments, the path is a bit overgrown and all the weeds need to be cleared off but it was always significantly easier revisiting a topic than understanding it for the first time.<p>For those who forget so quickly, I wonder if you felt like you truly understood it in the first place?
It speaks to how finite our hold on "knowledge" really is. At the moment we reach understanding, we experience a sophomoric feeling of confidence. But as it fades farther and farther from our working memory, we become less fluent and more hesitant. The emerging pattern becomes one of "I can understand these interesting concepts, but it takes a lot of work and they don't last, so I have to choose to understand practically and situationally." And then in the end our bodies and personalities turn out to control our minds more than we might want to believe, as we turn away from one problem and towards a different one on some whim, never able to view the whole.<p>As I recognize this more in myself, I am more inclined to become a bit of a librarian and develop better methods of personal note-taking and information retrieval, so that I lose less each time my mind flutters. At the moment that has turned into a fascination with mind maps - every time I need to think critically through a problem I start mapping it. In the future I might look into ways of searching through those maps.
From when I was taking machine learning courses and reading machine learning textbooks a few years ago, I have fond recollections of the derivations in Tom Mitchell's textbook.<p><a href="http://www.cs.cmu.edu/~tom/mlbook.html" rel="nofollow">http://www.cs.cmu.edu/~tom/mlbook.html</a><p>Where other textbooks tended to jump two or three steps ahead with a comment about the steps being "obvious" or "trivial", Mitchell would just include each little step.<p>Yes, you could argue it was my responsibility to remember all of my calculus and linear algebra. But it is kind to the reader to spell out the little steps, for those of us who maybe forgot some of our calculus tricks, or maybe don't even have all of the expected prerequisites but are trying to press on anyway. Or who actually know how to perform the steps but have to stop and puzzle through which particular combination of steps is being described as "obvious" in this particular instance.<p>I just remember how nice it was to have those extra steps spelled out, and how much more pleasant it made reading Tom's book.<p>So thanks, Dr. Mitchell!
> I have attempted to deliver [these lectures] in a spirit that should be recommended to all students embarking on the writing of their PhD theses: imagine that you are explaining your ideas to your former smart, but ignorant, self, at the beginning of your studies!<p>-Richard Feynman
First, I am not sure <i>Functional Analysis</i> is as obscure as some other areas. But, second, this just shows, once again, that one ought never to use "clearly," "obviously," etc. in proofs.<p>It is the same principle as writing programs so they are easier for the next programmer to read. That person may be you.
> Beyond scarcely stretching the boundaries of obscure mathematical knowledge, what tangible difference has a PhD made to my life?<p>The same thing a bachelor's degree does for everyone else. You've proven that you can start, stick with, and complete a task that takes multiple years and a complicated set of steps.
I had several exceptional Math teachers throughout my education, but the piece of advice that stuck with me the most is:<p>"If you're not sure what you want to do with your life, study Math. Why? Because Math teaches you how to think."<p>The skills I learned studying Mathematics have been invaluable; the Math that I am currently able to recall is abysmal.<p>The author did a great job calling this out succinctly: Mathematics is an excellent proxy for problem-solving.
As a PhD in applied math, I must say I concur wholeheartedly with the author. The true value of a PhD in a quantitative field is less about specific domain knowledge and more about the set of general problem-solving skills you pick up.
Math is the shadow universe of physics. Most theorems may not look like they are useful for anything real-world till someone is able to peg all the variables to the real world. And then, as if by magic, we realize we already know how the real world behaves. Till someone does this pegging, the theorems sit idle, waiting for problems to solve. I believe this is actually a good thing: we are letting people find solutions before someone finds problems to use them on.
Reminds me of this (well, without the forgetting part, but I do that with old code all the time)<p><a href="http://matt.might.net/articles/phd-school-in-pictures/" rel="nofollow">http://matt.might.net/articles/phd-school-in-pictures/</a>
If you find yourself saying that you gained nothing from your education other than soft skills, maybe you should have passed over the functional analysis part and put the effort directly into learning said soft skills. I'm in the same boat, and I can see how it can be hard to admit this.
My PhD is in physics, from 20+ years ago, and I would not be able to explain or defend it today without studying it for a while. I've even forgotten the language (Pascal) that I wrote my experimental control and analysis code in.<p>My experiment formed the basis of a fairly productive new research program for my thesis advisor, so at least it lived on in somebody's brain, but not in mine. ;-)
I've also forgotten basically all the high-level math from school, and have to re-learn it when the occasion comes to use some of it. But one thing that occurred to me is that in school I just learned how to do the calculations, so I never got a deep understanding of how things worked anyway. And that's fine.
If someone doesn't understand his own work five years later to this extent, that is a strong indication that the work is actually garbage, and the prior understanding five years ago was only a delusion brought on by the circumstances: the late nights, the pressure, and so on.<p>Perhaps it doesn't make sense today because it never did, and the self-deception has long worn off, not because the author has gone daft.<p>Several weeks ago, on the last work day before going on vacation, I submitted fixes for nine issues I found in one USB host controller driver. The last time I had looked at the code was more than a year ago. I had refactored it and really improved its quality. Looking at some of the code now, I couldn't understand it that well. But that's because it wasn't as good as I thought it was. I was still relying on the fictitious story of how I thought certain aspects of the code worked really well thanks to me, and <i>it wasn't meshing with the reality emanating from freshly reading it with more critical eyes</i>. And, of course, I was also confronted by a reproducible crash. As I read the code, I was forced to throw away the false delusions and replace them with reality. This is because I'm smarter and fresher today, not because I've forgotten things and gotten dumber! It's taking effort because something is actually being <i>done</i>.<p>Perhaps a similar problem is at work here: he's reading the paper with more critical eyes and seeing aspects that don't match the fake memory of how great the paper was, a memory formed by clouded judgment at the time of writing. Maybe that obscure notation he can't understand is actually incorrect garbage. His brain is reeling because it's actually digging into the material and trying to do proper <i>work</i>, perhaps for the first time.<p>If you can show that your five-year-old work is incorrect garbage, that suggests you're actually superior today to your former self from five years ago. So that could be the thing to do: don't read the paper assuming that it's right and you've gone daft. Catch where you went wrong.<p>By the way, I never have this problem with good code. I can go back a decade and everything is wonderful. Let's just say there is a suspicious smell if you can't decipher your old work.<p>Good work is clear, and based on a correct understanding which matches that work. There is a durable, robust relationship between the latent memory of that work and the actual work, making it easy to jog your memory.
It would have been cool if the original was linked here <a href="http://fjmubeen.com/2016/02/14/202/" rel="nofollow">http://fjmubeen.com/2016/02/14/202/</a> and not the medium repost. But still interesting.
This post inspired me to re-read my thesis (well, browse through it). Although it has been 16 years since I last looked at it, I didn't have any problem understanding it, and I didn't even really cringe reading it. I guess how bad this effect is depends on your field.
The dissemination of knowledge is at least as important as its discovery. Accessibility (i.e. clarity of exposition, availability to the public, etc.) needs to become a cardinal virtue in research.
The author almost reached the much more important conclusion from the experience he describes. He shouldn't have concluded the article by asking "what is the purpose of studying maths?" and then giving three stupid answers.<p>He should have asked: is this actually the "knowledge" they say academia brings to society? Is the money researchers earn being well spent? Did I actually deserve to be remunerated for this piece of work that no one understands -- and, in fact, no one has read except for maybe three people?
> <i>what is the purpose of studying maths?
Mathematics is an excellent proxy for problem-solving /
Mathematics embeds character in students /
Mathematics is fun</i><p>Those may be reasons to study maths (although studying anything seriously probably yields comparable benefits), but doing a PhD and writing a thesis is not only about yourself: it's supposed to <i>advance the field</i>. It's something you do for the general community.
As someone who left academia after a PhD in math (I've been working as a quant in HFT for the last few years, which mostly involves coding in one form or another), I can totally relate! Back then, all those stochastic integrals and measures made much more sense. However, it doesn't seem totally alien -- I'm pretty sure I could get back to hacking on math if required, but it would take at least several months to get into the flow.
> Mathematics is an excellent proxy for problem-solving<p>In my experience, earning a PhD in [redacted] was excellent training in problem solving. And in developing working expertise in new areas. I suspect that the choice of field is indeed irrelevant.<p>> Mathematics embeds character in students<p>I'd say that <i>actually finishing</i> a PhD does that.<p>> Mathematics is fun<p>Whatever you pick for your dissertation topic had better be fun ;)
I had a similar problem with a crypto presentation. Basically, I was angling for a free ticket to an expensive conference. The trick was to propose something that is plausible, but too arcane for practical use. The consolation prize would be a free ticket. Problem was that they accepted the talk. Damn!<p>So, I started to read crypto journals. Basically anything co-authored by Chaum, Micali, Goldreich, Wigderson. After a few weeks, I started to get the hang of it. Sort of like learning a new language. So, I gave the presentation and then forgot about it.<p>A few years later, I decided to show my powerpoint to someone and describe the process. WTF? How did this lead to that? Didn't understand half of it. Was really embarrassing.
Could part of it be that mathematical notation is just so bad? It's more of a shorthand than an actual tool for conveying meaning. So much context goes into establishing what a notated equation means - and that context is now gone.
<i>The Sheetrock was the last step. I myself would do the exterior and interior painting. I told Ted I wanted to do at least that much, or he would have done that, too. When he himself had finished, and he had taken all the scraps I didn't want for kindling to the dump, he had me stand next to him outside and look at my new ell from thirty feet away.</i><p><i>And then he asked it: "How the hell did I</i> do <i>that?"</i> --Kurt Vonnegut, <i>Timequake</i><p>I find the experience common when I look back on things I write or design or build. As Bill Gates said, “Most people overestimate what they can do in one year and underestimate what they can do in ten years.”
It seems to me that most commenters are ignoring the fact that the author is a guy who basically left high-level mathematics after completing his PhD.<p>So basically he went off to do other things unrelated to functional analysis, and his functional analysis got rusty.<p>It seems quite reasonable to me. Call it old code syndrome, call it "my math got rusty"; it seems quite normal to me.<p>Also: according to <a href="http://fjmubeen.com/about/" rel="nofollow">http://fjmubeen.com/about/</a>, the author got his PhD in 2007. It's 2016.<p>Almost ten years. What... what are we talking about?
This happens to me all the time. I have a very popular illustrated post on Monads titled "Functors, Applicatives, and Monads in Pictures"[1]. When I wrote it I thought it was the best monad guide ever. Now, reading back, I can see that some parts are confusing. I still see a lot of people liking it, but three years later I wish some parts of it were better.<p>[1] <a href="http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html" rel="nofollow">http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...</a>
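For anyone who hasn't read the guide, here is a minimal sketch of the three abstractions it illustrates, specialized to Maybe (the example values are mine, not taken from the post):

```haskell
-- Functor: apply a plain function inside a context.
functorExample :: Maybe Int
functorExample = fmap (+ 1) (Just 4)          -- Just 5

-- Applicative: the function itself is also inside a context.
applicativeExample :: Maybe Int
applicativeExample = Just (+ 1) <*> Just 4    -- Just 5

-- Monad: chain steps that each produce a new context.
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

monadExample :: Maybe Int
monadExample = Just 8 >>= half >>= half       -- Just 2
```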
He writes:<p>"Mathematics is an excellent proxy for problem-solving... Mathematics, by its concise and logical nature, lends itself to problem-solving (it is not unique in this regard)."<p>But how can we be sure this is true if he is unable to read what he wrote?<p>Maybe I'm thinking of the way Clojure programmers tend to use the word "concise" -- concise is meaningful only if it contributes to readability. Otherwise the more accurate description is "terse". And terse does not lend itself to problem-solving.
Reminds me that I need to get around to Piper Harron's thesis, which was made to be <i>seriously</i> readable.<p>Math seems to have a culture of systematically bad writing (as Bill Thurston discussed).
> Mathematics is an excellent proxy for problem-solving<p>I went to a special, maths focused high school class and this rings true on that lower level too. I am a reasonably successful programmer/architect today and I have -- repeatedly -- attributed my successful attitude toward solving my problems to the 1200 or so problems we solved during those four years. Our maths education was literally nothing else but solving one problem after the other.
It is not like riding a bike!<p>I am reading up on stats after 15 years away from the subject, and I have forgotten even the very basic stuff. Although the 'muscle memory' is there, so it is perhaps a bit easier than when the material was totally new.<p>What I also find is that I am now more interested in the application/intuition behind something than in the mechanics of the formulas. Maybe that has to do with a different aim, i.e. usefulness vs. passing an exam.
To make an admittedly bad metaphor, it's likely a lot of that knowledge has been moved from main memory to cold storage, and it would take some time to bring it back. It certainly makes the case for why we write things down! Although the part about having to dig for the main result makes me think the abstract could be improved...
Math is twiddling with formal systems and discovering how they behave. Some of it has uses, some doesn't, and some of what presently doesn't will, in the fullness of time, result in further islands of usefulness as yet not even imagined. But ultimately, it needs no more <i>justification</i> than orchestral music.
This wrapper[1] helps me edit LaTeX fragments in Vim/Emacs...<p>[1]: <a href="https://github.com/linktohack/lyxit" rel="nofollow">https://github.com/linktohack/lyxit</a>
It takes about three months before a paper we ship becomes the best reference we have on the subject, exceeding our own recollection.<p>Our brains can only hold so much, especially tiny details.
We are clearly approaching the point where unassisted human intelligence is becoming insufficient to continue to master even specific domain expertise.
I took a course on formal logic on Coursera. I put all of the questions and exercises into Anki, a spaced repetition program. This ensures I will always remember it and get it into my head at an intuitive level.<p>Basically it's like flash cards whose review intervals grow exponentially: the first review is in one day, the second in two days, then 4 days, and so on.
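A minimal Haskell sketch of that doubling schedule (this is only the simple pattern described above, not Anki's actual SM-2-based algorithm, which also adjusts each interval by a per-card ease factor):

```haskell
-- Review intervals in days: 1, 2, 4, 8, ...
reviewIntervals :: [Int]
reviewIntervals = iterate (* 2) 1

-- Days on which study sessions fall; day 0 is the initial study session,
-- and the remaining entries are the first n reviews.
reviewDays :: Int -> [Int]
reviewDays n = scanl (+) 0 (take n reviewIntervals)

-- reviewDays 5 == [0,1,3,7,15,31]
```

The point of the exponential growth is that each successful recall pushes the next review out roughly to where the card is about to be forgotten, so total review effort stays small.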