Since their inception, computers have been based on the model of a machine traversed by data that is processed like sausages through a mincer: data was fed directly to hardware whose code was assembled as physical wires.<p>However, the software part of a machine is made of the same material as the data, so the two are much closer to what current programming languages seem to imply.<p>It is natural that they tend to converge, especially in Lisp-like languages, into a kind of program where code and data are intermingled in one structure that organically co-evolves in whatever direction problem solving takes it.<p>Instead of a machine with a processing pipeline, a closer metaphor might be the phloem of plants: the living cells that transport food, which form part of the plant's structure and grow along with it.<p>(BTW, I've coined a name for those kinds of self-rendered cells, processed in place as collections of data sharing the same structure; I call them <i>'wits'</i> - as in, <i>the minimum unit of meaning</i> - in parallel to bits as the minimum unit of information. If you look closely, you'll begin to see that a lot of modern programming systems have them everywhere. So you may want to use the term 'a <i>wit</i>' to describe the concept explained in the article.)
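To make the code-and-data point concrete, here is a minimal Clojure sketch of what I mean (just an illustration, not code from the article):<p><pre><code>;; in a Lisp, a program is a list that can be built, inspected,
;; and evaluated like any other value
(def rule '(+ 1 2 3))   ; a piece of "code" held as plain data
(first rule)            ; => +   (we can take it apart like data)
(eval rule)             ; => 6   (or run it as code)</code></pre>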
Whenever I stumble on an article like that*, I can't stop wondering why so many developers think that SQL is hard/clunky in comparison.<p>*(any article where the author seemingly rebuilds a database engine from scratch with maybe 0.1% to 1% of the feature count present in any RDBMS, including sqlite)
Another alternative is to use Excel.<p>The downsides [compared to this one] being:<p>- Excel is bulkier. It takes time to start.<p>- You are forced to use Excel's editor, rather than any editor of your choice.
This is like Jupyter notebooks for Clojure, with UI elements. Certainly helpful for small applications.<p>That said, Jupyter seems to have community support for other languages[1], including Clojure, so it would be nice to see how that compares.<p>[1] <a href="https://github.com/jupyter/jupyter/wiki/Jupyter-kernels">https://github.com/jupyter/jupyter/wiki/Jupyter-kernels</a>
I've had a similar experience using the sam [1] text editor in command-line mode, basically like a REPL, filtering and modifying data with sam's "structural regular expressions". Rob Pike, the author of sam, specifically points out in the linked paper that the command language is great for manipulating multi-line "records".<p>Add the ability to { nest { expressions } } plus very flexible file system IO (e.g. pipe the selection to an external command and return the result), and you have a really nice recursive querying language. I specifically like it because of how easy it is to refine the query/regex substitution when it doesn't yield the expected result.<p>1: <a href="http://doc.cat-v.org/plan_9/4th_edition/papers/sam/" rel="nofollow">http://doc.cat-v.org/plan_9/4th_edition/papers/sam/</a> (In particular, scroll to the "Structural Regular Expressions" section.)
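A rough sketch of the kind of command I mean, with records separated by blank lines (syntax from memory of the paper, so treat it as approximate):<p><pre><code>x/(.+\n)+/ g/pattern/ p</code></pre><p>which loops over every run of non-empty lines, keeps the records containing the pattern, and prints them; swapping the trailing p for something like <code>| sort</code> pipes each matching record through an external command and replaces it with the output.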
This is interesting.<p>I do wonder what the ergonomics of querying data are for people who are unfamiliar with your APIs; they may prefer to query the data with SQL or a cursor-based API.<p>The Volcano model for query execution in SQL databases is useful, but I've never implemented it. (Kind of hard to search for on Google, though.)<p><a href="https://www.computer.org/csdl/journal/tk/1994/01/k0120/13rRUwI5TRe" rel="nofollow">https://www.computer.org/csdl/journal/tk/1994/01/k0120/13rRU...</a>
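Roughly, the Volcano idea is that every operator exposes a pull-style "next" and asks its child operator for one row at a time. A minimal Clojure sketch of that shape (hypothetical operator names, nothing to do with the article's code):<p><pre><code>;; each operator is a zero-argument fn: call it to pull the next row, nil = done
(defn scan [rows]
  (let [remaining (atom (seq rows))]
    (fn []
      (when-let [[row & more] @remaining]
        (reset! remaining more)
        row))))

(defn select [pred child]        ; filter: keep pulling until a row matches
  (fn []
    (loop []
      (when-let [row (child)]
        (if (pred row) row (recur))))))

(defn project [ks child]         ; projection: keep only some columns
  (fn []
    (when-let [row (child)]
      (select-keys row ks))))

(defn run [op]                   ; drive the pipeline until it's exhausted
  (take-while some? (repeatedly op)))

(run (project [:name]
       (select #(> (:age %) 30)
         (scan [{:name "Ada" :age 36}
                {:name "Bob" :age 20}
                {:name "Cleo" :age 41}]))))
;; => ({:name "Ada"} {:name "Cleo"})</code></pre>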
You might be interested in <a href="https://youtu.be/HB5TrK7A4pI" rel="nofollow">https://youtu.be/HB5TrK7A4pI</a><p>"We Really Don't Know How to Compute!" - Gerald Sussman (2011) [1:04:18]
You could also just have a string tag field, avoiding some of the type inference, which often needs to be overridden anyway.<p>I use your typical boring spreadsheet, and it's mostly just numbers and strings with a few sums for totals.
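A hypothetical sketch (Clojure-flavoured, just to illustrate the shape) of what an explicit tag field buys you:<p><pre><code>;; carry an explicit string tag instead of relying on type inference
;; that later has to be overridden
(def cells
  [{:value "2024-05-01" :tag "date"}
   {:value 19.99        :tag "number"}
   {:value "19.99"      :tag "text"}])   ; looks numeric, but the tag says otherwise

;; totals stay a plain sum over the cells tagged as numbers
(reduce + (map :value (filter #(= "number" (:tag %)) cells)))
;; => 19.99</code></pre>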