My first degree was in math. I've frequently discovered that I don't understand the notation in math papers. What does the paper mean by the harpoon (↼, an arrow with half an arrowhead), and how is it different from the waved arrow (⬿, an arrow with a wavy shaft)? Each discipline has its own conventions: is the backslash a set-difference operator or matrix division? Is alpha, α, an angle in some trig equation or the minimum assured score in alpha-beta pruning? It takes a bit of time to dive into a math paper. Even the universally understood symbol for integration (∫) can mean different things. Is it Riemann integration or Lebesgue integration? Is it symbolic or a numerical approximation? It depends upon context, and that context is communicated between mathematicians by the subject of the course, the preceding results in the paper, or just a few asides by a professor giving a lecture.<p>Computer scientists (I've been one for roughly 50 years) introduce their own academic notations. Is circled plus, ⊕, a boolean exclusive-or or a bitwise exclusive-or? Take a look at Knuth Vol. 4A: it's chock-full of mathematical notation embedded in algorithms. He uses superscripts that are themselves superscripted; how are those supposed to be entered in our text editors? What about sequence notation like 1, 4, 9, 16, ...? We might suppose it is just the integer squares, but the <i>On-Line Encyclopedia of Integer Sequences</i> lists 187 other possibilities. Is the compiler supposed to guess which one this is?<p>Well, if mathematicians use these concise notations, why shouldn't programmers? I believe it is because mathematicians don't want or need to take the time and space to spell out these operators, variables, and functions in their papers. It's not necessary for them: other specialists in their field can figure out what the symbols mean while reading, and their students can understand that a blackboard capital F (𝔽) is likely a field in a class on abstract algebra.<p>Programmers are doing something different. Their programs are going to be a lot longer than most math papers or lecture expositions. The programs have to deal with data, networks, business rules, hardware limits, etc. And everything in a program must be unambiguous and precise. Programs are large and can be edited many times by many people. For these reasons, I'm inclined to favor programming in plain language with plain ASCII.<p>See:<p>Knuth, <i>The Art of Computer Programming, Vol. 4A: Combinatorial Algorithms</i><p><i>The On-Line Encyclopedia of Integer Sequences</i>, <a href="https://oeis.org" rel="nofollow">https://oeis.org</a>