The Unison programming language (<a href="https://www.unison-lang.org/" rel="nofollow">https://www.unison-lang.org/</a>) follows a similar idea. Functions have immutable IDs, and "modifying" a function really means creating a new function; any callers that need to be updated to use the new function in turn become new functions themselves, and this bubbles up to the top. All of this is assisted by tooling.<p>The Unison ecosystem leverages that property to implement distributed computation where code can be shipped to remote workers in a very fine-grained way, but I guess this building block can be used for other ideas too (I don't know, I haven't quite put my mind to it, but it sounds very interesting).
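Roughly, the shape of it in C terms (a loose analogy with hypothetical _v1/_v2 suffixes; Unison actually identifies definitions by content hash, not by name):<p><pre><code>#include <stdio.h>

/* Definitions are never edited, only superseded by new ones. */
static int scale_v1(int x)  { return x * 2; }
static int report_v1(int x) { return scale_v1(x) + 1; }   /* caller of v1 */

/* "Changing" scale means appending scale_v2; any caller that wants the
   change is re-issued too, and this bubbles up the call graph. */
static int scale_v2(int x)  { return x * 3; }
static int report_v2(int x) { return scale_v2(x) + 1; }

int main(void) {
    printf("%d %d\n", report_v1(10), report_v2(10));   /* prints: 21 31 */
    return 0;
}
</code></pre>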
Interesting approach. A bit similar to 'test && commit || revert' (TCR) as done by Kent Beck.<p>I kind of do this with my AoC solutions with my literate programming approach, where I only add code to the markdown file. The file is then processed by the MarkDownC program [1], which takes all the C fragments in the markdown file and puts them in the right order to be compilable, overwriting earlier definitions of functions and variables. So each markdown file, one per day [2], shows all the steps of how I arrived at the solution. I do use a normal editor and use copy-and-paste a lot when making new versions of a certain function.<p>[1] <a href="https://github.com/FransFaase/IParse/?tab=readme-ov-file#markdownc">https://github.com/FransFaase/IParse/?tab=readme-ov-file#mar...</a><p>[2] <a href="https://github.com/FransFaase/AdventOfCode2024/blob/main/Day22.md">https://github.com/FransFaase/AdventOfCode2024/blob/main/Day...</a>
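To illustrate the overriding part (the fragment markers below are just illustrative comments, not MarkDownC's actual conventions): when a function is defined twice across the fragments, only the newest definition survives in the assembled file, so the compiled result is effectively:<p><pre><code>#include <stdio.h>

/* earlier fragment (superseded, kept here only as a comment):
     long solve(long input) { return input * 2; }            */

/* later fragment, which overwrites the definition above: */
long solve(long input) { return input * 2 + 1; }

int main(void) {
    printf("%ld\n", solve(21));   /* 43 */
    return 0;
}
</code></pre>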
<i>and you often discover, in the middle of writing your low-level functions, that your high-level functions need to be revised, which append-only programming makes difficult</i><p>On the other hand, it's not a problem if you start bottom-up, which is a natural style when writing in C; the low-level functions are at the top (and the standard headers included at the very top can be thought of as a sort of lowest-level), while main() is at the bottom.
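A trivial illustration of that bottom-up layout, with the lowest layer at the top and main() at the bottom so every definition precedes its use:<p><pre><code>#include <stdio.h>   /* lowest layer: the standard library */

/* low-level helper, defined before anything that calls it */
static int square(int x) { return x * x; }

/* mid-level function built on the helper */
static int sum_of_squares(int a, int b) { return square(a) + square(b); }

/* highest level last: by the time we get here everything it needs exists */
int main(void) {
    printf("%d\n", sum_of_squares(3, 4));   /* 25 */
    return 0;
}
</code></pre>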
I have a slightly different approach. Quoting my tweet:<p>"Once @cognition_labs AI Engineer Devin gets good enough, I will have it implement each feature as a series of Pull Requests:
- One or more Refactoring PRs - modify structure of existing code but no behavioral change.
- A final PR which is "append only" code - no structural change, only behavioral."<p><a href="https://x.com/realsanketp/status/1879766736742092938" rel="nofollow">https://x.com/realsanketp/status/1879766736742092938</a>
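A minimal sketch of that split in C (hypothetical names, just to show the shape of the two PRs):<p><pre><code>#include <stdio.h>

/* PR 1 (refactoring, no behavioural change): extract the repeated
   printf formatting from an existing report into a helper. */
static void render_line(const char *label, int value) {
    printf("%s: %d\n", label, value);
}

static void show_total(int total) {
    render_line("total", total);      /* body reshaped, behaviour unchanged */
}

/* PR 2 (the feature, append-only): new behaviour arrives as a new
   function that reuses the helper; nothing above is edited. */
static void show_average(int sum, int count) {
    render_line("average", count ? sum / count : 0);
}

int main(void) {
    show_total(10);
    show_average(10, 4);
    return 0;
}
</code></pre>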
Nice as a thought experiment, but you actually <i>do</i> get this in real life as well, when maintaining a public API with a large user base (where even the details of the internal workings need to be frozen over time.)<p>Gives you lovely stuff like the Win32 API (introduced in 1993 and still very much a thing!). CreateWindow, CreateWindowEx, CreateWindowExEx (OK, I made that up...), structs with a load-bearing length field, etc. etc. And read some Raymond Chen on the abuse that customers inflict on the more 'private' stuff...
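For anyone who hasn't met the "load-bearing length field" pattern, a sketch in C (hypothetical names, not an actual Win32 struct): the caller stamps the struct with the size it was compiled against, so new fields can only ever be appended and the callee can tell which ones the caller knows about.<p><pre><code>#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Version 1 shipped with only the first three fields; later fields may
   only be appended, never reordered or removed. */
typedef struct {
    size_t cbSize;      /* caller fills in sizeof() at its compile time */
    int    width;
    int    height;
    int    flags;       /* appended in a later release */
} WINDOW_PARAMS;

static void create_window(const WINDOW_PARAMS *p) {
    /* Only read the fields the caller actually knows about. */
    int flags = 0;
    if (p->cbSize >= offsetof(WINDOW_PARAMS, flags) + sizeof p->flags)
        flags = p->flags;
    printf("%dx%d flags=%d\n", p->width, p->height, flags);
}

int main(void) {
    WINDOW_PARAMS p;
    memset(&p, 0, sizeof p);
    p.cbSize = sizeof p;   /* an old binary would pass its older, smaller size */
    p.width = 640; p.height = 480; p.flags = 1;
    create_window(&p);
    return 0;
}
</code></pre>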
Very interesting also as a thought-provoking idea. For example<p><pre><code> - It would be less challenging if function pointer variables were used instead of functions. In that case, code appended later could override the function variables it needs to fix/change (see the sketch after this list)
- Since all the code is there, it is possible to invent some convention to compile/run previous versions without CVS machinery</code></pre>
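A minimal sketch of the first point (hypothetical names): route calls through a function pointer variable so that code appended later can retarget it without touching the earlier text.<p><pre><code>#include <stdio.h>

/* original implementation and the pointer all call sites go through */
static int greet_v1(void) { return printf("helo\n"); }
static int (*greet)(void) = greet_v1;

/* code appended later: a fixed implementation plus a line that
   repoints the variable; nothing above is edited */
static int greet_v2(void) { return printf("hello\n"); }

int main(void) {
    greet = greet_v2;   /* the appended "patch" takes effect here */
    greet();
    return 0;
}
</code></pre>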
Just an aside: I really like the design of this blog: it's very clean and the text is black on white. It's kinda shocking how many websites disregard basic accessibility rules about contrast and make text light gray.
Gerald Sussman (of MIT/SICP fame) has written and spoken a lot about practical ways of achieving this kind of thing. The idea is that when new features are added to software you only have to write new code, not change the existing code (and, likewise, removing features means only deleting code). Having a well-defined process that allows this kind of software development in a practical way is the dream, really.
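One common way to approximate that in C (a sketch of the general shape, not Sussman's actual machinery): keep the core generic and let features arrive as a new handler plus a registration, so adding a feature is mostly adding code rather than editing it.<p><pre><code>#include <stdio.h>
#include <string.h>

/* a tiny extensible dispatcher: the lookup loop never changes */
typedef double (*op_fn)(double, double);
struct op { const char *name; op_fn fn; };

static double add(double a, double b) { return a + b; }
static double mul(double a, double b) { return a * b; }
/* a later feature is a new function... */
static double pow2sum(double a, double b) { return a * a + b * b; }

static const struct op ops[] = {
    { "add", add },
    { "mul", mul },
    { "pow2sum", pow2sum },   /* ...plus one appended table row */
};

static double apply(const char *name, double a, double b) {
    for (size_t i = 0; i < sizeof ops / sizeof ops[0]; i++)
        if (strcmp(ops[i].name, name) == 0) return ops[i].fn(a, b);
    return 0.0;
}

int main(void) {
    printf("%g\n", apply("pow2sum", 3, 4));   /* 25 */
    return 0;
}
</code></pre>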
Back in the days when having your own tape reel for storing code was a thing (an upgrade from punch-cards), we used to do this .. write the first version of the code, stream it to tape, enhance the code some more, produce a diff, write the diff to tape, and on and .. on and on .. such that, to restore a working copy of a codebase, we'd rewind the tape "sync;sync;sync" and then progressively apply every diff as it was loaded from the stream. Every few months or so, we'd rewind the tape and stream the updated code to the front of it, and repeat the process again .. it was kind of fun to think that the whole tape had a working history.<p>These days of course we just use git, but there was a day that we could see the progress of a codebase by watching the diffs as they streamed in off the reels ..
I have played around a lot with GW-BASIC (and pcbasic (pip install pcbasic)) lately and this strikes me as something that could be made to work very well with old BASIC variants using line-numbers, since when the interpreter sees a new line with the same number as an existing line it will overwrite the old line. Tried this in a text file:<p><pre><code> 10 print "helo"
20 print "world"
10 print "hello"
</code></pre>
Worked as expected in both GW-BASIC 1.0 and pcbasic, printing "hello\nworld". Listing the program after loading it shows only the modified line 10.<p>It's a bit awkward since the BASIC editor/REPL itself cannot be used this way, but it would work for writing BASIC in a regular text editor and then running the file with BASIC as the interpreter.
If you have a large body of existing code and you want to change its behaviour, you have to work out where to add your change without breaking what is already there.<p>My thought on this is to somehow create a system where additional rules or changes to behaviour have marginal cost.<p>I am interested in the Rete algorithm, a rule-engine algorithm. We could run this sort of thing at compile time to wire up system architecture.<p>Boilerplate and configuration are an enormous part of programming, and I feel there really could be more tools for transforming software architecture.
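A hedged sketch of what "additional rules have marginal cost" could look like in C (nothing like a real Rete network, just the shape of a rule table where new behaviour is an appended row rather than an edit to control flow):<p><pre><code>#include <stdio.h>

/* facts the rules look at */
struct order { double amount; int is_new_customer; };

/* a rule is a predicate plus an action; new behaviour = a new row */
typedef int  (*pred_fn)(const struct order *);
typedef void (*act_fn)(const struct order *);
struct rule { pred_fn when; act_fn then; };

static int  big(const struct order *o)          { return o->amount > 1000; }
static void flag_review(const struct order *o)  { printf("review %.2f\n", o->amount); }

static int  newbie(const struct order *o)       { return o->is_new_customer; }
static void send_welcome(const struct order *o) { (void)o; printf("welcome email\n"); }

static const struct rule rules[] = {
    { big, flag_review },
    { newbie, send_welcome },   /* later requirements land as appended rows */
};

int main(void) {
    struct order o = { 1500.0, 1 };
    for (unsigned i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (rules[i].when(&o)) rules[i].then(&o);
    return 0;
}
</code></pre>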
So... it's literally the open-closed principle, in its straw-man form? Where you <i>actually</i> can't change the old code but can only write new?<p>Well, it's ridiculous. IMO, of course but... seriously. One of the greatest (and even joyful) things about being a software developer is that you <i>can</i> change old code. Literally go there, rewrite things, and end up with a new version of code (which is presumably better in some respect).
I'm personally very sympathetic to and interested in avant-garde, experimental software development methods like this. I understand that most devs reading this are mortified and doubt my sanity, but I <i>do</i> unironically use extremely, stupidly limiting techniques like this. For example, in my current project (an automated theorem prover) I have a rule that development happens in epochs. I write code in file theorem_prover_v1.rs, run some unit tests, take notes. Then I copy the file to theorem_prover_v2.rs and attempt to rewrite the whole thing based on the previous form. Every line is critically examined, and as many lines as possible are candidates for change. Do we need this type? Is this mathematically sound? Can I really justify that this abstraction is necessary? [1] It's an extremely inefficient and slow process, but a lot of software engineers don't appreciate that development efficiency--although <i>usually</i> among the most important factors in a project's success--is not necessarily <i>the only</i> important factor for all projects, and it is <i>worth</i> experimenting with methods that are intentionally inefficient but have promise to improve something else. If you don't like it, you can always go back to Agile or whatever else you do at your day job.<p>Art progresses under extreme restrictions. The same way Schoenberg put seemingly absurd restrictions on his music ((very roughly) don't repeat the same note before playing every other note, etc.) to create something radically novel, we as software developers can do so to advance our art as well.<p>[1] This method is the antithesis of the common "never rewrite a working program" software development methodology. Here, the experiment is to see what happens if we always rewrite and never add or modify, i.e. refactors are never allowed; instead, if things need changing, we re-design the whole thing top to bottom with the new understanding.
David Harel, the creator of statecharts, also developed the Behavioral Programming [0] 'b-thread' model motivated by a similar vision for append-only programming - it has been discussed on HN previously e.g. <a href="https://news.ycombinator.com/item?id=42060215">https://news.ycombinator.com/item?id=42060215</a><p>[0] <a href="https://cacm.acm.org/research/behavioral-programming/" rel="nofollow">https://cacm.acm.org/research/behavioral-programming/</a>
I've seen it in database schemas, since so many colleagues treat ALTER TABLE as black magic. In one example, there is a CRUD app with a projects table, and then later another table with a one-to-one mapping to projects was added. Of course there are integrity problems and N+1s.
What software has this person written that warrants giving a damn what they have to say about programming?<p>Anyone telling me to write software using `cat >> foo.c` better come with some receipts.
This is a fun exercise, but I've seen it a lot of times in big companies where people are too afraid to make changes, which is also where the open-closed principle comes from. It's something I don't like in practice anymore, because one has to get past the fear of breaking things in order to maintain clean code and clean architecture.
>And it produces source code that is eminently readable, because the text of the program recapitulates your train of thought – a kind of stream-of-consciousness literate programming.<p>Sorry, but what? This does not make any sense.