I'm exhausted by single-process programming languages. I don't think this problem is worth our time any more; we need to figure out how to build systems more broadly. We're wasting useful energy pretending such a small subproblem still has real distinction & interest.<p>There are a lot of different ways to make code, but at this point there's very little evidence that languages engender interesting new capabilities. There are some stylistic differences, but most code reads fairly the same if you squint. Rust is one of the most interesting languages around, and its most interesting additions are... constraints. We need new frontiers of possibility, not just constraint. Languages are not leading us to new potentials these days.
Mojo [1]<p>Mojo is another approach to solving "the two-language problem," similar to what Julia is trying to do, but the code looks like Python. Mojo is being designed by Chris Lattner [2], who knows a thing or two about language & compiler design.<p>[1] <a href="https://en.wikipedia.org/wiki/Mojo_(programming_language)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Mojo_(programming_language)</a><p>[2] <a href="https://en.wikipedia.org/wiki/Chris_Lattner" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Chris_Lattner</a>
TypeScript is the best thing to happen to programming languages since Ruby.<p>If only it could run outside the JS realm - there is almost nothing more to desire from a general-purpose application language (vs. system/low-level ones).
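To make the expressiveness claim concrete, here's a small sketch of the kind of typing TypeScript offers a general-purpose application language (the `Shape` type and `area` function are invented for illustration): discriminated unions with compiler-checked exhaustiveness.

```typescript
// A discriminated union: the compiler narrows the type in each branch
// based on the "kind" tag, and flags any unhandled case.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
  }
}

console.log(area({ kind: "rect", width: 3, height: 4 })); // 12
```

If a new variant is added to `Shape` later, every non-exhaustive `switch` becomes a compile error rather than a runtime surprise.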
<a href="https://www.unison-lang.org/" rel="nofollow noreferrer">https://www.unison-lang.org/</a><p>I am excited about the problems that go away under the unison model.<p>For example:<p>- no dependency version resolution<p>- fully incremental builds<p>- no merge conflicts<p>- fully incremental distribution<p>- easy, fine-grained tree shaking<p>- no code formatting (except for display in your editor)
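The items above mostly fall out of one idea: in the Unison model, code is content-addressed, so a definition is identified by a hash of its body rather than by its name. Here's a hypothetical TypeScript sketch of that idea (not Unison's actual implementation; `hashDefinition` and the `codebase` map are invented for illustration):

```typescript
import { createHash } from "node:crypto";

// A definition is identified by the hash of its normalized body;
// names are just metadata pointing at hashes.
function hashDefinition(normalizedBody: string): string {
  return createHash("sha256").update(normalizedBody).digest("hex").slice(0, 12);
}

// Two names bound to the same body share one hash: renaming causes no
// merge conflict, and a build cache keyed by hash is incremental for free.
const codebase = new Map<string, string>(); // name -> definition hash
const body = "(x: number) => x + 1";
codebase.set("increment", hashDefinition(body));
codebase.set("succ", hashDefinition(body));

console.log(codebase.get("increment") === codebase.get("succ")); // true
```

Dependency versions disappear for the same reason: a caller references the exact hash of the definition it was built against, so two "versions" of a library are just different hashes coexisting in one codebase.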
While not all of these are "languages" per se, I am excited about the various ML compilation efforts:<p><a href="https://github.com/merrymercy/awesome-tensor-compilers">https://github.com/merrymercy/awesome-tensor-compilers</a><p>Modern ML training/inference is inefficient and lacks portability. These frameworks are how that changes...<p>As random examples, TVM runs LLaMA on Vulkan faster than PyTorch CUDA, and AITemplate almost doubles the speed of Stable Diffusion. Triton somewhat speeds up PyTorch training in the few repos that use it now, and should help AMD hardware even more than Nvidia.
Probably older than a year or two, but still very interesting PLs:<p>1. Unison<p>2. Gleam<p>3. Futhark (and similar GPU compute PLs)<p>4. Lisps with static typing / algebraic type systems
I'm really excited about Rust! Its focus on memory safety and performance makes it a promising language for systems programming. Additionally, Python's popularity continues to grow, thanks to its simplicity and versatility for various domains.