There are several technical advantages to the types-to-the-right style. For one, it's often easier to write a parser for: having an explicit <i>let</i> or <i>val</i> keyword makes it immediately obvious to a parser without lookahead that the statement in question is a declaration, and not an expression. This is less of a problem in languages like Java, where the grammar of types is simpler, but in C and C++, if you begin a line with<p><pre><code> foo * bar
</code></pre>
then it's not yet clear to the parser (and won't be until more tokens are observed) whether this declares <i>bar</i> as a pointer to <i>foo</i> or multiplies <i>foo</i> by <i>bar</i>. This isn't a problem for name-then-type syntaxes.<p>On a related note, it's often advantageous for <i>functions</i> to state their return type last as well, especially when the return type can depend on the type of an earlier argument. There are plenty of examples of this in functional and dependently typed languages, but even C++ (which historically listed the return type first) has added an alternate (slightly clunky) syntax where the return type is specified after the arguments, for exactly this reason:<p><pre><code> template<typename Container, typename Index>
auto
foo(Container& c, Index i)
-> decltype(c[i])
{ ... }</code></pre>
I don't think it has much to do with type inference. It's more that type systems became more complicated, and so did type names. And with a long composite type name, the name of the variable gets pushed too far out and obscured. It worked great in Algol, and still works pretty well in C (although that is partly because it splits the declarator to keep array and function syntax to the right of the variable name), but in C++ with templates it's already hard to read.<p>There are also a variety of issues with parsing it that way, most of which go away entirely if the name is first.
Looks to me like the world had mostly settled on types-on-the-right already in the 1970s, except C was an anomaly and languages which imitated its syntax in other ways often imitated that too.
Python type hints (since 3.5) also follow the same pattern. See: <a href="https://www.python.org/dev/peps/pep-0483/" rel="nofollow">https://www.python.org/dev/peps/pep-0483/</a>
Types are moving to the right because having them on the left makes the grammar ambiguous: languages like C and C++ can only be parsed when semantic information is fed back into the parser.<p>For example, consider the following C++ statement:<p><pre><code> a b(c);
</code></pre>
This is a declaration of `b`. If `c` is a variable, this declares `b` of type `a` and calls its constructor with the argument `c`. If `c` is a type, it declares `b` to be a function that takes a `c` and returns an `a`.
I disagree that types are moving over to the right side for readability's sake or anything like that. Modern theorem-proving languages explicitly treat a type declaration as a membership statement, which makes the type-ascription operator (usually :) analogous to set membership (∈). That influence rubbed off on modern MLs, and by extension inspired many of the ML-influenced, type-safe languages that have come out recently.
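For instance, in a Lean-style theorem prover (a minimal sketch; the definitions are made up):

```lean
-- The colon in a declaration is read as membership: "n : Nat" ≈ "n ∈ ℕ".
def n : Nat := 5

-- Function declarations follow the same convention: name first, type after.
def double (k : Nat) : Nat := 2 * k

#check n       -- n : Nat
#check double  -- double : Nat → Nat
```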
Type annotations on the right read much more naturally to me, e.g. "customerNameToIdMap is a hash map of strings to UUIDs" rather than "there's a hash map of strings to UUIDs called customerNameToIdMap".
Nitpick, but in the graphic at the beginning, C# is presented as a language designed in the 21st century, when it was in fact designed, and released, in the last year of the 20th.
Is there a name for the side on which the type declaration sits? Left-/right-typed? If so, I'd argue for back/forward or start/end instead, to account for right-to-left languages.
That first chart really annoys me: rather than giving us the names of the languages, it uses logos, which don't tell you what the languages are if you don't recognize them. And in general it's an extra cognitive layer and load on your brain. Makes it pretty unreadable.
Interesting observation. Is a wide enough sample of languages considered? What scares me is that we still use text files to program in 2019 such that we are having this conversation. The author makes a good observation nonetheless.
Let me correct that for you<p><pre><code> That is essentially the way it is done in Scala (2004), F# (2005), *ActionScript 3 (2006)*,
Go (2009), Rust (2010), Kotlin (2011), TypeScript (2012), and Swift (2014) programming languages.</code></pre>
In JavaScript, with prototypal inheritance, where you inherit from an instance rather than a type, it becomes very confusing what Foo would mean in a line saying Foo bar. So a more explicit distinction is needed there, which is easiest to achieve with a colon and moving the type to the right.
I have a question: are the languages that provide type inference inspired by HMT [1]?<p>[1]: <a href="https://en.m.wikipedia.org/wiki/Hindley–Milner_type_system" rel="nofollow">https://en.m.wikipedia.org/wiki/Hindley–Milner_type_system</a>
So much easier to read and write code with inferred types.<p>Though my co-workers obsessed with writing ‘good’ code refuse to use them. They also refuse to write comments because their code is so good ‘it documents itself’.
I'm not convinced. The article found one situation where it makes more sense for the type to be on the right, but in most situations it makes more sense for the type to be on the left. The reason being that it reads more naturally. It's the difference between saying "Golfer Tiger Woods adopted a cat" and "Tiger Woods, golfer, adopted a cat." Nobody speaks the latter.