> And need I remind anybody that you cannot buy a monochrome screen anymore? Syntax-coloring editors are the default. Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?<p>I'm colorblind, please never do that. Syntax highlighting is fine since it's another way to help, but color having some kind of semantic importance, so that pink code and violet code run differently, would be hell for me.<p>Edit: something else: color looks different depending on your computer. Consider how many complaints I already see here about people developing UIs for extra-wide monitors while most people are on 15-inch screens; that would be terrible. Don't even get me started on arguments over "is this blue, blueish green or green?". Colors are way more subjective than people seem to think.<p>> For some reason computer people are so conservative that we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.<p>And no new letters have been added to English or French lately; they seem to be doing just fine. Typing those new symbols would be hell if keyboards are not designed for it.
The Unison language stores code as a syntax tree, and they're planning to support multiple alternative syntaxes for the language.<p><a href="https://www.unisonweb.org" rel="nofollow">https://www.unisonweb.org</a><p>Unison’s core idea is that code is immutable and identified by its content. This lets us reimagine many aspects of how a programming language works. We simplify codebase management — Unison has no builds, no dependency conflicts, and renaming things is trivial.<p>Some blog posts by the author of the language describing a bit of the background:<p><a href="https://pchiusano.github.io/2013-05-22/future-of-software.html" rel="nofollow">https://pchiusano.github.io/2013-05-22/future-of-software.ht...</a><p><a href="https://pchiusano.github.io/2013-09-10/type-systems-and-ux-example.html" rel="nofollow">https://pchiusano.github.io/2013-09-10/type-systems-and-ux-e...</a>
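<p>The content-addressing idea fits in a few lines (a toy Python sketch only, assuming we hash a serialized syntax tree; Unison's real scheme normalizes the tree first, and all names here are made up):<p><pre><code>import hashlib

def content_id(syntax_tree: str) -> str:
    # A definition is identified by the hash of its syntax tree,
    # making the definition itself immutable.
    return hashlib.sha3_256(syntax_tree.encode("utf-8")).hexdigest()

# Names are just metadata pointing at hashes, which is why renaming
# is trivial: only the name-to-hash mapping changes.
names = {}
h = content_id("(lambda (x) (+ x 1))")
names["increment"] = h
names["succ"] = h
</code></pre>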
>> For some reason computer people are so conservative [...]<p>Well, one of the underlying reasons for the lack of imagination might be... <i>keyboards</i>. If keyboard keys were small e-ink displays, easily configurable and accessible by programs, programmers would have come up with a lot of interesting stuff already. We do it with function icons in regular interfaces. If we could integrate with keyboards, we'd definitely take advantage of it.<p>Now, there might be many more reasons. The article also mentions subroutines displayed horizontally and other stuff. That could definitely be done too, but... while we aren't there yet, many interfaces definitely make good use of horizontal screen space.<p>The main problem is that to do any of these, you kinda require coordination beyond the scope of solving a single technical problem. Unless the right hardware is available to enough people, custom symbols and keys and whatever would only work experimentally. And it would be a worthy experiment, but developing a language is already enough work without also having to add a custom revolutionary IDE to the mix, in the context of experimentation. In the current economic system, when the path to market is long and unclear, most good ideas die anonymously.
The best thing about using only ASCII for the main syntax is that everyone can type it with their keyboard. I think the recent fad of supporting Unicode identifiers is misguided. Of course, Unicode should be permitted in string literals, but not in the code itself.<p>Also, I don't think Go is better than modern C++, though it might be better than C++98, which was the main standard when the article was written.
A programming language designed by a Mac user might make use of the symbols §, ±, ≤ and ≥, among others. I think the tyranny of ASCII is really the tyranny of the tragically poor support for entering characters beyond a small national-language set on the commonly-used OSes, especially Windows. (macOS is significantly better here but no utopia.) Windows-1252, MacRoman and so on may not be the standard character sets any more, but you wouldn't know it from what the OSes make easy to type!
If I wanna use something like a ™ I have to google how to enter it, or just google and copy the character. I don't want any extra glyphs in my code until it's just as easy to enter (at close to full speed) as the glyphs I already have access to.
> we still find it more uncompromisingly important for our source code to be compatible with a Teletype ASR-33 terminal and its 1963-vintage ASCII table than it is for us to be able to express our intentions clearly.<p>D is a fully Unicode language - comments can be Unicode, the builtin documentation generator can be Unicode, even identifiers can contain Unicode characters. I've been tempted to add Unicode characters for operators many times.<p>The trouble is the keyboard.<p>I've seen many ways to enter Unicode. I even invented a new way to do it for my text editor. All are awkward and unsatisfactory. There's no way to touch-type them, either.<p>What is needed is a dynamically remappable keyboard, with a configurable display on each keytop so you know which letter it is. Nobody is going to remember what the remapped letters are without that.
I've recently dipped my toe into APL, and I tend to agree.<p>APL only really advances from <i>lines of ASCII text</i> to <i>lines of Unicode text</i>. It's still very line-oriented. If Unicode expands toward TeX/LaTeX/&c it could look a lot more like written math. Subscripts are nice, but there's more to it.<p>Unicode entry is <i>so poorly supported</i> on macOS & Windows it's really not funny. Emacs (C-x 8 ENTER) is my best bet, or Xah Lee's website &c.<p>If one were to design a new APL, one would be tempted to just use the few characters Apple lets you type with an Option or Command key. (And of course I can't easily type those symbols, so I refer to them by six letters each—that's likely not going to change.)<p>All APL needs, though (EDIT: and really I mean <i>any language</i>, as long as we have common sequences), is <i>leader key sequences</i>. Rho is `r. To me, it always will be. The interface [1] has this down pat. We should be able to type anything in Unicode with just a few of the 104 keys. Like how T9 allows 12 buttons to type 26 letters, &c.<p>The world is never going to build a 12,000-key Unicode keyboard. We're going to have to use leader sequences. Just my (beginner's) opinion.<p>[1] even tryapl.org/ !
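<p>A toy version of the leader-sequence idea (a Python sketch; the backtick mapping below is illustrative, not any particular IDE's actual table):<p><pre><code># Toy leader-sequence input: ` starts a sequence, the next key picks a glyph.
GLYPHS = {"r": "⍴", "i": "⍳", "a": "⍺", "w": "⍵"}

def expand(text):
    # Replace each `x pair with its glyph; pass everything else through.
    out, i = [], 0
    while i < len(text):
        if text[i] == "`" and i + 1 < len(text) and text[i + 1] in GLYPHS:
            out.append(GLYPHS[text[i + 1]])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(expand("3 3 `r `i 9"))  # prints: 3 3 ⍴ ⍳ 9
</code></pre>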
If it's not on the keyboard, nobody is going to type it.<p>Solutions:<p>a) only use characters in the intersection of the top N most popular keyboard layouts<p>b) issue programmers' keyboards with an agreed character set<p>c) issue programmers' keypads with supplementary characters<p>d) add on-screen supplementary keypads
People should listen to this guy. He's got some perspective. The main reason we keep things the way they are is something like tradition, or some kind of psychological effect where we want to fit in. Whatever it is, it's not reason. All the rationalizations come after.<p>We definitely should be able to put functions in the horizontal space, use colors as part of the syntax, and use Unicode symbols where warranted.<p>I would go farther and say that we should have at least some ability to edit things in a non-serialized way, like a WYSIWYG math formula, for example.<p>I hope people will also explore structural program editing. <a href="https://en.m.wikipedia.org/wiki/Structure_editor" rel="nofollow">https://en.m.wikipedia.org/wiki/Structure_editor</a><p>Almost forgot, one more crazy idea: switch to larger, instantly reconfigurable touchscreen keyboards to allow more symbols to be entered easily.
> It was certainly a fair tradeoff—just think about how fast you type yourself—but the price for this temporal frugality was a whole new class of hard-to-spot bugs in C code.<p>> Niklaus Wirth tried to undo some of the damage in Pascal, and the bickering over begin and end would no } take.<p>"1970 - Niklaus Wirth creates Pascal, a procedural language. Critics immediately denounce Pascal because it uses "x := x + y" syntax instead of the more familiar C-like "x = x + y". This criticism happens in spite of the fact that C has not yet been invented." <a href="http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html" rel="nofollow">http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...</a>
> How desperate the hunt for glyphs is in syntax design is exemplified by how Guido van Rossum did away with the canonical scope delimiters in Python, relying instead on indentation for this purpose. What could possibly be of such high value that a syntax designer would brave the controversy this caused? A high-value pair of matching glyphs, { and }, for other use in his syntax could. (This decision also made it impossible to write Fortran programs in Python, a laudable achievement in its own right.)<p>The irony here is that Python <i>does</i> have an open-scope delimiter. It is the colon. What it lacks is a close-scope delimiter. But you can hack one using the pass statement and emacs auto-indent, and in my code I do this so that my Python code always auto-indents correctly. Without this you cannot reliably cut and paste Python code, because you can't count on leading white space being correctly preserved.
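<p>Roughly what that hack looks like (a minimal sketch of the convention; nothing here is blessed by Python itself, and the function is made up):<p><pre><code># A `pass` at the end of each block marks where the scope closes,
# so auto-indent and cut-and-paste can't silently change the meaning.
def classify(x):
    if x > 5:
        label = "big"
        pass
    else:
        label = "small"
        pass
    return label
</code></pre>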
> Why do we still have to name variables OmegaZero when our computers now know how to render 0x03a9+0x2080 properly?<p>The majority of developers are not from Greece; they don’t have those keys on their keyboard. Technically, modern C# supports Unicode just fine; here’s an example.<p><pre><code>using System;
using System.Collections.Generic;
using System.Linq;

static class Program
{
    // Unicode identifiers compile fine: Σ is an extension method, α a local.
    static double Σ( this IEnumerable<double> elements ) =>
        elements.Sum();

    static void Main( string[] args )
    {
        double[] α = new double[ 3 ] { 1, 2, 3 };
        Console.WriteLine( α.Σ() );
    }
}
</code></pre>
> Why not make color part of the syntax?<p>Similar reason: input becomes more complicated. You're gonna need to either memorize hotkeys or reach for the mouse. Also, the copy-pasting UX becomes way too complicated.
> Why keep trying to cram an expressive syntax into the straitjacket of the 95 glyphs of ASCII when Unicode has been the new black for most of the past decade?<p>Because it must be possible to actually <i>type</i> syntax.<p>The problem is a physical one: keyboards are limited in space; we need alphabets, punctuation, and a bunch of control keys, and finally we have a very small amount of space left to fit some arbitrary symbols - they might as well be ASCII ones. The only way to truly break away from the ASCII table would be to start making giant non-standard keyboards... and now you have a huge inclusivity barrier, not to mention the impracticality of physically huge keyboards.<p>This all seems like a lot of effort and argument over something even more superficial than language syntax - they are only glyphs... how does using more and different glyphs substantially change anything?
>>> world's second write-only programming language<p>I am not clever enough to comment on the meat of the article but I love that quote :-)
This really strikes me as a "not even wrong" post. I'm not sure there is anything wrong with programming, and if there is something wrong with it, the problem sure isn't "there are not enough operators."<p>My favorite languages don't even have the _idea_ of operators, and in languages with custom operators, with all their crazy-ass rules about precedence and content-free representations, I'm always re-translating the code to s-expressions in my mind.<p>The Scheme way seems fine to me - operator-like functions when the meaning is universally understood, and then human-readable names for literally everything else.
This really made me smile:<p>> Programmers are a picky bunch when it comes to syntax, and it is a sobering thought that one of the most rapidly adopted programming languages of all time, Perl, barely had one for the longest time. The funny thing is, what syntax designers are really fighting about is not so much the proper and best syntax for the expression of ideas in a machine-understandable programming language as it is the proper and most efficient use of the ASCII table real estate.
Most people who think they see ASCII dependence are actually perceiving standard keyboard dependence. Modern keyboards all ape the IBM Model M, and all modern programming languages are constrained by Model M compatibility. ASCII is an implementation detail.
Very few “radical ideas” for programming language formats pass the “easier to use than typing” test, let alone the “so much easier it might merit jumping off path dependence” test.
The author wonders,<p>“But programs are still decisively vertical, to the point of being horizontally challenged. Why can't we pull minor scopes and subroutines out in that right-hand space and thus make them supportive to the understanding of the main body of code?”<p>Which made me wonder if there might be some way to make an editor do something along these lines, without changing the programming language. Similarly with his speculations about using color.
Yeah, this couldn’t be more misguided. It would be the very definition of a boondoggle.<p>The key point here is chunking. Mentally you can process the + as a plus sign. It’s a single concept. Its association with addition probably means the reader can make useful inferences about what it does.<p>Now consider the Greek letter ζ (zeta). For anyone unfamiliar with it, that would be more than one chunk, as you would probably try to remember the shape.<p>Worse, you may have no idea how to input it.<p>And to get any of this we’d have to deal with encodings. For what, exactly?<p>And sure, not everyone is familiar with the Latin script that dominates ASCII, but you have to pick something, and for better or for worse English is the lingua franca of programming.<p>And don’t even get me started on the insanity that is Unicode attempting to assign a code point to every symbol ever imagined and then extending that with compound code points and modifiers.
I think it's good to think outside the box. Maybe incorporating some tried and true Unicode would make sense.<p>However, I don't think the issue is ASCII but rather consistency. I'm making a C-competitor language and using "c-like syntax", except I'm enforcing consistency. () is always one or more statements, with the last expression returning the value. {} always represents concrete data like a struct, array or function args/result. []? That's a piece of a type name, allowing the developer to name something Foo[U32; Str]<p>fn foo: {a: U32, b: Str} -> Str (
    if a > 5 then b + " is greater than 5"
    else b + " lte 5"
)
I agree with the conclusion of the article; we should move past the requirements of a long-gone era of computing, but the suggestions provided don't feel like improvements to me.<p>> Why keep trying to cram an expressive syntax into the straitjacket of the 95 glyphs of ASCII when Unicode has been the new black for most of the past decade?<p>Because I don't have 143,859 keys on my keyboard. Having to type λ would involve either changing my keyboard layout or using some key shortcut. In either case I'd miss the benefit of muscle memory to just type `func`.<p>I don't see ASCII as a hindrance, but as the lowest common denominator for being expressive in any language, computer or human (Romance ones at least).<p>> Why not make color part of the syntax? Why not tell the compiler about protected code regions by putting them on a framed light gray background? Or provide hints about likely and unlikely code paths with a green or red background tint?<p>Why should a compiler be concerned about how errors are displayed? You can already do these things with a program that reads the output and renders it as you wish. We could argue that the information should be available in a machine-readable format to avoid parsing text, but I don't see how a deeper integration of color would help.<p>The things that I <i>would</i> like to see in the next 20 years of programming are:<p>- Abandoning text files and filesystems as an abstraction. Most programming languages are fine with handling only "modules", so having to manage files and file paths just feels unnecessarily clunky.<p>Light Table[1] attempted something like this IIRC, where you only dealt with functions or modules directly, and it was a joy to use. Expanding this to a language would simplify things a lot.<p>- Speaking of which, code versioning systems should abandon that concept as well. It's 2021 and the state of the art is still diffing pieces of text and doing guesswork at best to try and complete an automatic merge.<p>Our versioning tools should be smarter, programming language aware and should minimize the amount of interaction they require. I can't imagine the amount of person-hours ~~wasted~~invested on trying to understand and make Git work.<p>- Maybe just a fantasy and slightly off-topic for this discussion, but I'd like to see more programmer-friendly operating systems. With the way Windows and macOS are going, Linux is becoming more like the last bastion for any general programming work. Using these systems often feels like the computer is controlling me rather than the reverse.<p>[1]: <a href="http://lighttable.com/" rel="nofollow">http://lighttable.com/</a>
This will work once we have keycaps with e-ink or a similar technology, and once it's super cheap for everyone. Then we'll all have keyboards with an infinite number of characters, and a new chapter of typing will begin.
Wow. The same thing I was thinking about. We built a fundamentally flawed system when it comes to ASCII, strings and human languages.<p>A computer language needs to abstract the representation of time and space. Solving that will improve the human-computer interfacing problem. Connect the two worlds with a commonality; humans writing instructions is not that.<p>The MOV command is the single source of truth. It does something with space and involves time. Start there and build upwards.
I use Unicode characters in Java identifiers as much as I can get away with; I wrote a code generator that embeds all kinds of funny brackets that are ‘meaningful’ as an emergent property.<p>For entry, I can cut-and-paste symbols faster than most people type. Also, the completion feature of the IDE works just fine.
Discussed at the time:<p><i>Programming languages have to break free from the tyranny of ASCII</i> - <a href="https://news.ycombinator.com/item?id=1850938" rel="nofollow">https://news.ycombinator.com/item?id=1850938</a> - Oct 2010 (116 comments)
We literally have to:
<a href="https://github.com/ASCII-Rightholders/the_characters" rel="nofollow">https://github.com/ASCII-Rightholders/the_characters</a><p>Unless, of course, we want to buy the not too generously priced license
Why not just use ligatures? For example, => turns into an arrow in some fonts. This could likely be extended. I think the major issue is accessibility: how do we make sure whatever solution we choose can be used by most people?
Python could be a variable-font language: the more you indent, the smaller the font gets. You could always see the whole program on one page. The editor would zoom automatically, of course.
As much as I admire Mr Kamp's work, I think this is nonsense. Just because something looks familiar doesn't really mean it is the same. There are good reasons to pick syntax that is familiar. Novelty comes at a cost.<p>And I also think he is wrong if he thinks languages will be improved by making them harder to type and read.<p>I've spent a considerable amount of time trying to understand code written by someone in China. All the comments in that project are in Chinese - which I don't understand. Now imagine using symbol names in Chinese. Or Hangul. Or Russian. Or in Baybayin script. Or Sanskrit.
The article's argument in a nutshell:<p>We need to break free from the tyranny of the characters on our keyboard, and express ourselves using characters not on our keyboard.<p>So I hope you see why that argument keeps failing.
> My disappointment with Rob Pike's Go language is that the rest of the world has moved on from ASCII, but he did not.<p>Um, the Go reference explicitly states that "Source code is Unicode text encoded in UTF-8" [1], so I'm not sure what the hell this guy is talking about. The language itself maybe? Well contrary to the author of the article I'm really no fan of Rob Pike and I think he's a massive arrogant prick, but in this particular case he was absolutely right to be pragmatic for the language syntax and let people be stupid enough to use inaccessible characters in their source code. Which, come to think of it, actually flies in the face of the root principle of Go as stated by Pike himself, which is to shield dumb rookie Google developers from their supposed inexperience.<p>[1] <a href="https://golang.org/ref/spec#Source_code_representation" rel="nofollow">https://golang.org/ref/spec#Source_code_representation</a>
I noticed Rust recently added Unicode support for writing code; does anyone know if it can be disabled per crate?<p>I don't think having symbols that cannot be typed or pronounced is such a good idea.
"When I was a child, I used to speak like a child, think like a child, reason like a child; when I became a man, I did away with childish things.<p>[..]<p>Syntax highlighting is juvenile. When I was a child, I was taught arithmetic using colored rods. I grew up and today I use monochromatic numerals." - Rob Pike