
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Implementing Interactive Languages

80 points, by luu, almost 2 years ago

18 comments

vector_spaces, almost 2 years ago

> an interactive language - one where code is often run immediately after writing

Sorry if I'm ignorant, but I thought the term "interactive language" usually referred to languages where the code is run and modified in real time, while it is running, as in Smalltalk or Self, using something akin to Erlang's hot-reload capabilities? [1] I think the viewpoint in that paradigm is to "cut out the middleman" of having a UI altogether, and to treat code as the UI, effectively.

It feels like the author is describing interactive languages in the sense of IPython or REPL-based programming, unless I badly misunderstood. I was interested because I am in the process of writing a baby Smalltalk, although nowhere near as sophisticated as what the author seems to be building; I'm just starting to learn this stuff. I still enjoyed reading, though.

[1] https://en.m.wikipedia.org/wiki/Interactive_programming
gumby, almost 2 years ago

Just do both. You could easily do it with the Lisp machines of the late 70s, and it's even easier today.

Use an interpreter to execute code interactively. That's what the user wants anyway.

Compile code in a background thread or process. When it's ready, map it into memory and swap your interpreted references for the compiled version. Make the GOT writable and update it as necessary - sort of like the UUO mechanism in ITS, or the CP/M syscall table.
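The interpret-now, swap-in-compiled-code-later scheme gumby describes can be sketched in a few lines. This is an illustrative model only (Python standing in for both tiers, a dict standing in for the writable GOT); the names and timing are invented for the example:

```python
import threading
import time

# Dispatch table playing the role of a writable GOT: every call site
# looks up the current implementation here, so swapping an entry
# retargets all future calls.
dispatch = {}

def interp_square(x):
    # Stand-in for the slow interpreted path.
    return x * x

def compile_in_background(name, slow_impl):
    def worker():
        time.sleep(0.01)            # pretend compilation takes a while
        compiled = slow_impl        # a real JIT would emit machine code here
        dispatch[name] = compiled   # swap: future calls take the fast path
    threading.Thread(target=worker, daemon=True).start()

dispatch["square"] = interp_square          # interpreter answers immediately
compile_in_background("square", interp_square)

print(dispatch["square"](7))  # same answer whichever tier happens to run
```

Callers never know which tier served them; the swap is invisible as long as both implementations agree on semantics.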
lispm, almost 2 years ago

SBCL has an interactive/incremental/in-memory native-code compiler and a file compiler.

https://www.sbcl.org/
talismanick, almost 2 years ago

> Futumura projections (eg truffle, rpython)

Pretty sure RPython/PyPy dropped partial evaluation in favor of a metatracing JIT: https://www.pypy.org/posts/2018/09/the-first-15-years-of-pypy-3412615975376972020.html#why-did-we-abandon-partial-evaluation
samsquire, almost 2 years ago

I am a beginner in this space, but I have some thoughts.

The JVM is slow to start up, but once started it is fast. I think it's slow due to how classes are arranged and looked up. I think Graal or CDS works to speed this up.

I like the idea of a process that is fast to start up, or that runs in the background, such as php-fpm.

My JIT compiler is an incomplete toy, but I only compile a function when it is called.

https://GitHub.com/samsquire/compiler

I have written about the idea of supercompilation, which is the idea that we can apply lots of resources to compilation to create the most performant execution. But it requires you to be upfront and more detailed about your program semantics for safe reordering.

I think this stuff is sufficiently complicated that the code is hard to read without an explanation of what is going on.
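The compile-only-when-called idea can be modeled with a stub that triggers "compilation" on first call and caches the result. A minimal sketch, with Python functions standing in for real code generation (the cache and decorator names are invented for the example):

```python
compiled_cache = {}

def lazy(source_fn):
    """Wrap a function so 'compilation' happens only on first call."""
    name = source_fn.__name__
    def stub(*args):
        if name not in compiled_cache:
            # A real JIT would lower source_fn to machine code here;
            # we just record that compilation happened and reuse it.
            compiled_cache[name] = source_fn
        return compiled_cache[name](*args)
    return stub

@lazy
def add(a, b):
    return a + b

assert "add" not in compiled_cache  # nothing compiled yet
print(add(2, 3))                    # first call triggers "compilation"
assert "add" in compiled_cache      # now cached for later calls
```

Functions that are never called never pay for compilation, which is the whole point of the lazy scheme.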
owlstuffing, almost 2 years ago

Check out project manifold [1]. It integrates languages at the type-system level using a technique the project labels "type manifold", which is described as JIT *static* code generation.

This project targets the JVM, but the general model could be adopted with any static type system / compiler.

1. https://github.com/manifold-systems/manifold#what-can-you-do-with-manifold
lhames, almost 2 years ago

Regarding LLVM's JIT infrastructure: you can plug your own compiler into it if LLVM is not fast enough. You can also mix and match multiple compilers within a single JIT'd program.

The LLVM JIT APIs operate in terms of abstract "materialization", and provide an in-memory, just-in-time linker to link object files into the process. You just have to write a materializer that calls your compiler, then hands the object back to the LLVM JIT APIs to be linked.

The advantages you get from using LLVM's JIT APIs to wrap your compiler are: 1) it can manage compilation requests from multiple threads of JIT'd code, and it can dispatch compilation work to multiple threads (or other processes); 2) it has built-in support for lazy compilation, so you don't need to write this part yourself; 3) it can JIT across process boundaries (and architecture, object-format, and OS boundaries); 4) it supports many object-format features (e.g. exceptions, general-dynamic TLS, static initializers, etc.).
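The materialization contract lhames describes - a symbol is defined by a callback, and looking the symbol up is what triggers compilation - can be modeled abstractly. This is not LLVM's actual API, just a toy Python model of the pull-based design (class and method names are invented for the sketch):

```python
class JITDylib:
    """Toy model of ORC-style materialization: symbols are defined by
    materializers, and lookup triggers compilation on demand."""
    def __init__(self):
        self.materializers = {}   # symbol -> callable that "compiles" it
        self.ready = {}           # symbol -> materialized definition

    def define(self, symbol, materializer):
        self.materializers[symbol] = materializer

    def lookup(self, symbol):
        if symbol not in self.ready:
            # Your compiler - whichever one - runs only when a symbol
            # is first looked up; results are then linked in and cached.
            self.ready[symbol] = self.materializers[symbol]()
        return self.ready[symbol]

jd = JITDylib()
jd.define("square", lambda: (lambda x: x * x))  # plug in "your compiler"
print(jd.lookup("square")(6))
```

Because the JIT only sees materializers, any compiler (or several, mixed) can sit behind the same lookup interface.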
eatonphil, almost 2 years ago

While he mentions the slowness of LLVM, it would have been cool to see Jamie's thoughts on tinycc and qbe as well. I've been looking into the fastest options for generating and executing machine code (without doing it all myself; generating and compiling C feels like a happy medium).
noelwelsh, almost 2 years ago

The answer is Graal or RPython, which for not very good reasons they "immediately discard".
transfire, almost 2 years ago

Crystal (https://crystal-lang.org/) has both a compiler and an interpreter.
w10-1, almost 2 years ago

I'm not getting it.

How is this different from REPL mode in Python, Swift, or Java? (There was even a REPL for Java 1.1.) Both the javac and Eclipse compilers have incremental compilation, so they save state and do only the necessary updates - and provide effective feedback at each step. One interesting project would be to use an incremental linker with clang/swift.

For human-interactive programming, the key feature is not speed of execution (or really of compilation/build), but the clarity of the feedback and code assist. IDEs are growing by leaps and bounds in content-assist features, and shell languages have blossomed of late by supporting type completion almost as well as IDEs.

For interactive use, type inference in the language would seem to be a key requirement, and that means a robust type system, likely reduced to a constraint-solving algorithm. (Has anyone made one of those work in an incremental mode?)

Finally, for ground-up or semantic prototyping, I would think a good way to try a new language/compiler combination would be Haskell, where people are starting to support incremental compilation. Here pandoc could be an interesting code base, because it interprets multiple (markup and document) languages with the same AST. (Which itself suggests investigating implementations for DSLs...)

So the article would seem to ignore longstanding work streams in this space, to focus on the wrong features, and not to consider some relevant alternatives.

So... it's easy to miss the forest for the trees? I must be missing something...
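The "save state and do only the necessary updates" behaviour of incremental compilers boils down to a reverse-dependency closure: when a module changes, rebuild it plus everything that (transitively) depends on it. A minimal sketch with an invented module graph:

```python
# deps maps each module to the modules it depends on.
deps = {"app": {"util", "core"}, "core": {"util"}, "util": set()}

def affected(changed, deps):
    """Transitively find everything that must be rebuilt when the
    modules in `changed` are edited (reverse-dependency closure)."""
    dirty = set(changed)
    grew = True
    while grew:
        grew = False
        for mod, uses in deps.items():
            if mod not in dirty and uses & dirty:
                dirty.add(mod)
                grew = True
    return dirty

print(sorted(affected({"util"}, deps)))  # everything sits above util
print(sorted(affected({"core"}, deps)))  # util itself stays clean
```

Everything outside the dirty set keeps its previously compiled artifacts, which is where the incremental speedup comes from.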
mpweiher, almost 2 years ago

Objective-S is aiming right at that sweet spot.

https://objective.st

The dichotomy between "interactive, but slow" and "fast, but extremely slow to compile" is a false one - similar to the one we had between "easy-to-use but crashy client operating systems (Windows, MacOS)" and "solid but user-hostile server OSes". Now my watch runs Unix.

For Objective-S, this is a side quest, but an important one.

https://blog.metaobject.com/2019/12/the-4-stages-of-objective-smalltalk.html

I made 3 false starts with LLVM, but each time got discouraged by the crazy binary sizes, compile times, and sheer conceptual overhead, all for (mostly) a slew of optimisations that yield extremely diminishing returns in the first place and mostly aren't applicable anyway.

I now have a tiny (and still incomplete) AOT native compiler for aarch64 in addition to the AST walker, and couldn't be happier. It produces .o files, can JIT in principle, and I am now looking at getting sufficient linker functionality in there to directly create executables and dylibs/frameworks. Oh, and ELF support.
nsm, almost 2 years ago

One option is to write your language as a transpiler targeting Racket and then leverage the existing Chez Scheme implementation for performance. Racket is usually faster than Python, although initial process startup is slightly slower than CPython's.
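The transpile-to-Racket route amounts to emitting s-expressions from your AST and handing the result to the Racket toolchain. A minimal sketch of the emitter half, for an invented toy expression AST (nested tuples of operator, left, right):

```python
def to_racket(node):
    """Emit Racket source text from a tiny expression AST:
    numbers, variable names, and binary ops as (op, left, right)."""
    if isinstance(node, (int, float)):
        return str(node)
    if isinstance(node, str):
        return node          # variable reference, emitted verbatim
    op, left, right = node
    return f"({op} {to_racket(left)} {to_racket(right)})"

# 1 + (x * 2), in AST form
ast = ("+", 1, ("*", "x", 2))
print(to_racket(ast))  # an s-expression ready to hand to racket
```

Because Racket's surface syntax is already a tree, the "code generation" step is close to a pretty-printer; the hard work (optimization, native code) is delegated to Chez Scheme underneath.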
JonChesterfield, almost 2 years ago

It's an interesting article. Has some oversights, but still a good design space to think about.

I like the Futamura projections approach. The associated description - write a partial evaluator and you get a compiler for free - is starting to look like an inside joke (compilers are easier to write than partial evaluators), but the Graal/RPython approach, where someone else has already written the partial evaluator and you strap on your own interpreter, has definite merit.

There's also this community wisdom, echoed in the article, that Python can't be made fast because the runtime-extensible language semantics get in the way. Somehow that is known to be true despite JavaScript having the same properties *and* very fast implementations, and people are still holding onto it as truth after Mojo started talking about statically compiling the static parts of the program. Also the Self and Lisp implementations that came before.

--------

For what it's worth, my current thinking on this goes:

0/ Representation. Define a static single assignment (SSA) form bytecode.

1/ Front end. Translate source semantics into that bytecode as simply as possible.

2/ Runtime. Write a machine-code implementation of each opcode used by 1/.

3/ Back end. Compile the bytecode to a series of (machine code) calls to said opcodes.

That's the baseline compiler. No baseline interpreter involved. The main design goal of each step is to do the absolute bare minimum work possible. An argument could be made that the above describes a template JIT if you do the translation at runtime, or a naive ahead-of-time compiler if you do it before running the program. They're the same thing, really.

Going from there to a non-baseline compiler involves increasingly heroic optimisations on the SSA form, and adding operations to the bytecode that correspond to things like memory operations on the target machine.

--------

The compiler runtime is worth a mention, as it's something that doesn't get much love in compiler-implementation discussions. I didn't realise it existed before I started working on these things. A compiler is a translator from language A to B. Wherever something in A doesn't have an obvious counterpart in B - maybe A has hash tables and B is x86 machine code - either the compiler emits code that makes the thing happen anyway, or it emits a call to a pre-written function.

This might be integer division written as a loop for hardware that doesn't have integer divide, or it might be an OpenMP offloading construct parametrised on a function pointer. You can essentially move complexity out of the compiler and into the supporting library, and functionality can move between the two at will. If the runtime library is written in (or compiled to) the compiler intermediate representation instead of machine code, you get the same performance and semantics as having the compiler build the same code on the fly.
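The baseline plan above - prewritten routines per opcode, then "compilation" as flattening bytecode into a sequence of calls to those routines - can be sketched compactly. Python closures stand in for machine code here, and the opcode set is invented for the example:

```python
# 2/ Runtime: one prewritten routine per opcode, operating on a stack frame.
def op_const(frame, k):
    frame.append(k)

def op_add(frame):
    b = frame.pop(); a = frame.pop(); frame.append(a + b)

def op_mul(frame):
    b = frame.pop(); a = frame.pop(); frame.append(a * b)

OPS = {"const": op_const, "add": op_add, "mul": op_mul}

# 3/ Back end: "compile" bytecode into a flat list of bound calls to
# those routines - no opcode dispatch loop remains at run time.
def compile_baseline(bytecode):
    steps = [(OPS[op], args) for op, *args in bytecode]
    def run():
        frame = []
        for fn, args in steps:
            fn(frame, *args)
        return frame.pop()
    return run

# (2 + 3) * 4
prog = compile_baseline([("const", 2), ("const", 3), ("add",),
                         ("const", 4), ("mul",)])
print(prog())  # 20
```

Run the translation eagerly and it reads as a naive AOT compiler; run it on first call and it reads as a template JIT - the same code either way, which is the comment's point.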
zabzonk, almost 2 years ago
Forth?
simondanisch, almost 2 years ago

Improving Julia's interpreter and code caching should get you pretty close to that dream language! And it's already a great, existing language with a nice ecosystem :)
suyash, almost 2 years ago

"an interactive language - one where code is often run immediately after writing"

I believe you need some kind of server that listens for code changes and executes the program really quickly; all of that happens so fast that it feels like real time, similar to how 'live coding' works in a web browser. Does that help?
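The watch-and-rerun server suyash describes can be sketched as a polling loop over a file's modification time. A minimal sketch (a real live-coding setup would use inotify/FSEvents and keep the process warm between runs; the function name and `max_polls` escape hatch are invented for the example):

```python
import os
import time

def watch_and_run(path, run, poll=0.2, max_polls=None):
    """Call run(path) whenever the file's mtime changes.
    max_polls bounds the loop so the sketch can terminate."""
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        mtime = os.path.getmtime(path)
        if mtime != last:          # file was (re)written since last check
            last = mtime
            run(path)              # re-execute the user's program
        time.sleep(poll)
        polls += 1
```

With a short enough poll interval the edit-to-result gap drops below what a human notices, which is what makes it feel "live".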
danielvaughn, almost 2 years ago

Not directly related, but this made me think of something I've been interested in recently: structured editors. Instead of tokenizing text and then parsing to an AST, you effectively edit the AST directly.

Since the thrust of the post seems to be about the sum of compilation + run time, it's a potentially more efficient alternative to traditional code editing. Here's an example of one in action:

https://tylr.fun/
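The edit-the-AST-directly idea can be shown in miniature: the program lives as a tree, edits mutate tree nodes, and "display" is just rendering the tree back to text, so no lexer or parser ever runs. A toy sketch with an invented list-based AST:

```python
# A tiny "structured editor": the program *is* the AST, and edits are
# operations on nodes, so there is no tokenize/parse step at all.
def render(node):
    if not isinstance(node, list):
        return str(node)
    op, left, right = node
    return f"({render(left)} {op} {render(right)})"

program = ["+", 1, ["*", 2, 3]]   # represents (1 + (2 * 3))
program[2][0] = "-"               # edit the operator node directly
print(render(program))            # (1 + (2 - 3))
```

Because every edit goes through the tree, the editor can rule out syntactically invalid states entirely, and downstream tools (type checker, compiler) always receive a well-formed AST.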