One thing to note is that there is no universally agreed-upon convention for denoting math objects. Textbooks and research papers often start with a section on notation to clarify the symbols they will use. Notation varies between fields and academic schools, and sometimes even between what you would write on a blackboard and what you would use in print.<p>That being said, for the most basic concepts the notation is pretty consistent, so if you skim through one or two books you'll get a feel for it. Understanding the actual math will take longer.<p>As for references, here is a very comprehensive standard, ISO 80000-2, which defines recommendations for many of the math symbols and mentions other variations:
<a href="https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#page=10" rel="nofollow">https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag...</a><p>For something shorter (and less complete), you can also check the notation appendices in my books:
<a href="https://minireference.com/static/excerpts/noBSguide_v5_preview.pdf#page=136" rel="nofollow">https://minireference.com/static/excerpts/noBSguide_v5_previ...</a>
<a href="https://minireference.com/static/excerpts/noBSguide2LA_preview.pdf#page=159" rel="nofollow">https://minireference.com/static/excerpts/noBSguide2LA_previ...</a>
Here are a few notation resources I've found helpful when teaching myself computer science:<p>- Mathematics for Computer Science: <a href="https://courses.csail.mit.edu/6.042/spring17/mcs.pdf" rel="nofollow">https://courses.csail.mit.edu/6.042/spring17/mcs.pdf</a><p>- Calculus Made Easy: <a href="http://calculusmadeeasy.org" rel="nofollow">http://calculusmadeeasy.org</a><p>Not directly related to your question but useful for interviews and programming puzzles nonetheless:<p>- Algorithms and Data Structures, The Basic Toolbox: <a href="https://people.mpi-inf.mpg.de/~mehlhorn/ftp/Mehlhorn-Sanders-Toolbox.pdf" rel="nofollow">https://people.mpi-inf.mpg.de/~mehlhorn/ftp/Mehlhorn-Sanders...</a><p>- Basic Proof Techniques: <a href="https://www.cse.wustl.edu/~cytron/547Pages/f14/IntroToProofs_Final.pdf" rel="nofollow">https://www.cse.wustl.edu/~cytron/547Pages/f14/IntroToProofs...</a>
For interviews and programming puzzles you only need to know the notation for basic mathematical logic, basic set theory, summation notation, and maybe some bits and pieces of number theory:<p><a href="https://en.wikipedia.org/wiki/Logical_connective#Common_logical_connectives" rel="nofollow">https://en.wikipedia.org/wiki/Logical_connective#Common_logi...</a><p><a href="https://en.wikipedia.org/wiki/Quantifier_(logic)#Notation" rel="nofollow">https://en.wikipedia.org/wiki/Quantifier_(logic)#Notation</a><p><a href="https://en.wikipedia.org/wiki/Set_theory#Basic_concepts_and_notation" rel="nofollow">https://en.wikipedia.org/wiki/Set_theory#Basic_concepts_and_...</a><p><a href="https://en.wikipedia.org/wiki/Summation#Capital-sigma_notation" rel="nofollow">https://en.wikipedia.org/wiki/Summation#Capital-sigma_notati...</a><p><a href="https://en.wikipedia.org/wiki/Modular_arithmetic#Congruence" rel="nofollow">https://en.wikipedia.org/wiki/Modular_arithmetic#Congruence</a><p>If Wikipedia is too hard to follow, you can learn all of this from the early chapters of a discrete mathematics textbook.
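To make that concrete, here is a small sketch of my own (not taken from any of the links above) showing how capital-sigma notation and modular congruence translate into Python:<p><pre><code>
# Sigma notation: \sum_{i=1}^{n} i^2 reads "add up i*i for i from 1 to n, inclusive".
def sum_of_squares(n):
    return sum(i * i for i in range(1, n + 1))

# Congruence: a ≡ b (mod m) means m divides (a - b).
def congruent(a, b, m):
    return (a - b) % m == 0

assert sum_of_squares(4) == 30   # 1 + 4 + 9 + 16
assert congruent(17, 5, 12)      # both 17 and 5 leave remainder 5 mod 12
</code></pre>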
There is a good book on math notation that I like:<p>"Mathematical Notation: A Guide for Engineers and Scientists"<p><a href="https://www.amazon.com/Mathematical-Notation-Guide-Engineers-Scientists/dp/1466230525" rel="nofollow">https://www.amazon.com/Mathematical-Notation-Guide-Engineers...</a>
I'd argue that any attempt at understanding mathematical notation universally will fail. Different fields, different sub-topics, and different authors have vastly different conventions, for good reason.<p>Sure, one can perhaps expect that something using an integral sign shares some properties with ordinary integration of real functions, but to really understand what the notation entails, one has to study the underlying material.<p>I feel that what you're asking for is akin to wanting to read a novel in a foreign language using only a dictionary of the 10% most commonly used words of that language, with each entry resolving to only one meaning of the word.
You internalise mathematical notation by using it to solve mathematical problems and express mathematical ideas.<p>Two excellent resources are:<p>1. Introduction to Mathematical Thinking (if you prefer MOOCs) - <a href="https://www.coursera.org/learn/mathematical-thinking" rel="nofollow">https://www.coursera.org/learn/mathematical-thinking</a><p>2. How to Think Like a Mathematician - <a href="https://www.amazon.co.uk/How-Think-Like-Mathematician-Undergraduate/dp/052171978X" rel="nofollow">https://www.amazon.co.uk/How-Think-Like-Mathematician-Underg...</a>
In addition to some great responses already on here, I would suggest picking up a functional programming language as a way to bridge the gap between math and the C-style syntax that most of us learned to program in. Haskell and PureScript are good for this; many programs actually use even more mathy aliases for common tokens (e.g. `∀` for `forall`).
What does "understand" mean? Notation is just that: notation.<p>I think that the single biggest advantage one can have (in programming that does something "non-trivial", a loaded term I know, rather than as a person) is a firm grasp of the mathematical basis of one's work. It's so much easier to start something new when you can derive it yourself.<p>If you have the time, I recommend "Advanced Engineering Mathematics" for the gap between calculus and applications, plus other topics like linear algebra, analysis, and graph theory.<p>If you just want a mapping of symbols to words, try the LaTeX documentation, à la <a href="https://oeis.org/wiki/List_of_LaTeX_mathematical_symbols" rel="nofollow">https://oeis.org/wiki/List_of_LaTeX_mathematical_symbols</a>
To understand basic notation like summations and matrix multiplications, I created Math to Code, a quick tutorial that translates math into NumPy code:<p><a href="https://mathtocode.com/" rel="nofollow">https://mathtocode.com/</a><p>Previous HN discussion / it was on the front page earlier this week:<p><a href="https://news.ycombinator.com/item?id=23513438" rel="nofollow">https://news.ycombinator.com/item?id=23513438</a>
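To give a flavour of that kind of translation (a sketch of my own, not an excerpt from the tutorial), here is how a summation and a matrix product look in NumPy:<p><pre><code>
import numpy as np

# Capital-sigma: \sum_{i=1}^{n} x_i becomes a reduction over an array.
x = np.array([1.0, 2.0, 3.0, 4.0])
total = np.sum(x)              # 10.0

# Matrix product: C = AB, where C[i, j] = \sum_k A[i, k] * B[k, j].
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A @ B                      # equivalent to np.matmul(A, B)
</code></pre>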
There is no universal rule of mathematical notation except that things written as an index (whether as a subscript, superscript, or argument) are the things the given object depends on. Everything else builds on that rule and is defined in some context.<p>Source: I am a mathematician
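A programmer's way to read that rule (my own analogy, not the parent's): an index in math usually becomes an array index or a function parameter in code.<p><pre><code>
# x_i (subscript), x^(k) (superscript label), and x(t) (argument) all say
# "x depends on this thing"; in code the dependence becomes an index or a parameter.
x = [2, 4, 8, 16]        # x_i  ->  x[i]
x_2 = x[2]               # x_2 = 8 (with 0-based indexing, itself a convention clash)

def x_of(t):             # x(t)  ->  a function of t
    return t ** 2
</code></pre>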
Not an article or a course, but this math-as-code cheat sheet is pretty good: <a href="https://github.com/Jam3/math-as-code" rel="nofollow">https://github.com/Jam3/math-as-code</a>
In addition to other comments, I would also recommend "A Programmer's Introduction to Mathematics" by Dr. Jeremy Kun [0]. The HN submission [1] may have more interesting stuff around the topic.<p>[0] <a href="https://pimbook.org/" rel="nofollow">https://pimbook.org/</a><p>[1] <a href="https://news.ycombinator.com/item?id=18579076" rel="nofollow">https://news.ycombinator.com/item?id=18579076</a>
Notation varies depending on the author and subject area, but a good resource for "programmer/computer science" notation is to skim through <i>Concrete Mathematics</i> or the preliminaries of <i>The Art of Computer Programming</i> -- I find their notation to be common.<p>In more specialized areas like type theory, first-order logic, predicate calculus, and temporal logic, you have to pick it up as you go.
This won't solve all your problems, but it _can_ be a big help to know what to search for when you see a wall of symbols, and detexify.kirelabs.org is a decent resource for that -- you can draw a single symbol and get the LaTeX code that would generate it.<p>(If you're typesetting math it's invaluable, not just decent.)
There are a lot in the Abramowitz and Stegun handbook, in the last section, "Index of Notation". It's not quite what you're asking for, but it's fairly authoritative.