Author here, happy to answer questions!<p>Direct link to the project: <a href="https://github.com/asg017/sqlite-http" rel="nofollow">https://github.com/asg017/sqlite-http</a><p>A few other projects of interest:<p>- sqlite-html: <a href="https://github.com/asg017/sqlite-html" rel="nofollow">https://github.com/asg017/sqlite-html</a><p>- sqlite-lines: <a href="https://github.com/asg017/sqlite-lines" rel="nofollow">https://github.com/asg017/sqlite-lines</a><p>- various other extensions: <a href="https://github.com/nalgeon/sqlean" rel="nofollow">https://github.com/nalgeon/sqlean</a><p>And past HN threads:<p>- <a href="https://news.ycombinator.com/item?id=32335295" rel="nofollow">https://news.ycombinator.com/item?id=32335295</a><p>- <a href="https://news.ycombinator.com/item?id=32288165" rel="nofollow">https://news.ycombinator.com/item?id=32288165</a>
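For a quick taste, usage from the sqlite3 CLI looks roughly like this (the compiled extension file name may differ depending on how you built it; example URL is just a placeholder):<p><pre><code>  .load ./http0
  -- fetch a response body directly from SQL
  select http_get_body('https://example.com/');
  -- or persist it for later processing
  create table pages as
    select 'https://example.com/' as url,
           http_get_body('https://example.com/') as body;
</code></pre>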
Side note: I love the attention that SQLite is getting now. It's so refreshing after the dark ages of misapplied technology: microservices, Docker, NoSQL, and countless front-end frameworks.
On a side note, Observable is stupendous: interactive network graphs, charts, maps, animations. Not sure why we don't see it used more.<p>Mike Bostock himself is constantly adding his latest work at <a href="https://observablehq.com/@mbostock" rel="nofollow">https://observablehq.com/@mbostock</a>. A great learning resource.
I picked up some very neat new SQLite tricks from this post, independent of the extension itself (which is very cool). I didn't know about the "define" module, which lets you create new table-valued functions that are defined as SQL queries themselves.
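If you haven't seen it either, the gist (going from the sqlean define docs, so treat the exact syntax as approximate) is that you hand define a SELECT with named parameters and it becomes a table-valued function:<p><pre><code>  .load ./define
  -- a table-valued function that splits a string once on a separator
  create virtual table strcut using define((
    select
      substr(:str, 1, instr(:str, :sep) - 1) as prefix,
      substr(:str, instr(:str, :sep) + 1)    as suffix
  ));
  select prefix, suffix from strcut('name=value', '=');
  -- prefix = 'name', suffix = 'value'
</code></pre>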
Would be interesting if this could do HTTP/1.1 pipelining. Need to take a closer look.<p>What makes me hesitate is that I almost always do some text processing after the HTTP response and before the SQL commands. Due to resource constraints, I do not want to store HTML cruft.<p>I mostly do<p>HTTP response --> text processing --> SQLite3<p>rather than<p>HTTP response --> SQLite3<p>However, I also need to store different combinations of HTTP request headers and values in a database. I currently use the UNIX filesystem as a database (one header per file; djb's envdir loads the headers into the environment from a selected folder), but maybe I could use SQLite3. For unprocessed text, e.g., from pipelined DoH responses, I use tmux buffers as a temporary database. Then I do something like<p><pre><code>  HTTP responses --> tmux loadb /dev/stdin
  tmux saveb -b b1 /dev/stdout | textprocessingutility --> ip-map.txt (append unique, i.e., "add unique")
</code></pre>
The ip-map.txt file gets loaded into the memory of a localhost forward proxy.<p>Databases such as NetBSD's db, djb's cdb, kdb+, or sqlite3 can help with the "append unique" step if the data gets big.<p>Note: any JSON I retrieve is "text", not binary data. Most HTTP responses are HTML, sometimes with embedded JSON. Pipelined DoH is binary, but I still use drill to print the packets as text. (When I finally learn ldns I will stop using drill.)
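For what it's worth, the "append unique" step is where sqlite3 would slot in naturally: a text primary key plus INSERT OR IGNORE gives add-if-absent semantics for free. A minimal sketch (table name and row format made up to match my ip-map.txt use case):<p><pre><code>  create table if not exists ip_map (entry text primary key) without rowid;
  -- duplicates are silently skipped by the primary key
  insert or ignore into ip_map (entry) values ('93.184.216.34 example.com');
  -- dump the deduplicated list back out for the proxy
  select entry from ip_map;
</code></pre>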
While this is really cool (and a feat of engineering, no less), I'm always really concerned when someone suggests making an HTTP call from their database server.<p>Many years ago my boss told me to make a scheduled task in Windows that executed a SQL query to make an HTTP call, and I asked him why we couldn't just use crontab/cURL. His response: "cURL? Like from the 90s?"<p>Anyhow, I didn't last very long. Got fired shortly thereafter.