Analyzing multi-gigabyte JSON files locally

215 points by bubblehack3r about 2 years ago

35 comments

jmmv about 2 years ago
Some random comments:

* A few GBs of data isn't really that much. Even /considering/ the use of cloud services just for this sounds crazy to me... but I'm sure there are people out there that believe it's the only way to do this (not the author, fortunately).

* "You might find out that the data doesn’t fit into RAM (which it well might, JSON is a human-readable format after all)" -- if I'm reading this right, the author is saying that the parsed data takes _more_ space than the JSON version? JSON is a text format and interning it into proper data structures is likely going to take _less_ space, not more.

* "When you’re ~trial-and-error~iteratively building jq commands as I do, you’ll quickly grow tired of having to wait about a minute for your command to succeed" -- well, change your workflow then. When tackling new queries, it's usually a good idea to reduce the data set. Operate on a few records until you have the right query so that you can iterate as fast as possible. Only once you are confident with the query, run it on the full data.

* Importing the data into a SQLite database may be better overall for exploration. Again, JSON is slow to operate on because it's text. Pay the cost of parsing only once.

* Or write a custom little program that streams data from the JSON file without buffering it all in memory. JSON parsing libraries are plentiful, so this should not take a lot of code in your favorite language.
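For the last suggestion above, a minimal sketch of such a streaming program in Python, assuming the file is a single top-level JSON array of objects; ijson is one of the streaming parsers mentioned elsewhere in this thread, and the field name is hypothetical:

    import ijson  # streaming JSON parser: pip install ijson

    def stream_records(path):
        """Yield objects from a top-level JSON array without loading the file."""
        with open(path, "rb") as f:
            # "item" selects each element of the root array as it is parsed
            yield from ijson.items(f, "item")

    # Example: count matching records in constant memory
    bad = sum(1 for r in stream_records("data.json") if r.get("status") == "error")
    print(bad)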
ddulaney about 2 years ago
I really like using line-delimited JSON [0] for stuff like this. If you're looking at a multi-GB JSON file, it's often made of a large number of individual objects (e.g. semi-structured JSON log data or transaction records).

If you can get to a point where each line is a reasonably-sized JSON document, a lot of things get way easier. jq will be streaming by default. You can use traditional Unixy tools (grep, sed, etc.) in the normal way because it's just lines of text. And you can jump to any point in the file, skip forward to the next line boundary, and know that you're not in the middle of a record.

The company I work for added line-delimited JSON output to lots of our internal tools, and working with anything else feels painful now. It scales up really well -- I've been able to do things like process full days of OPRA reporting data in a bash script.

[0]: https://jsonlines.org/
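Getting from one huge top-level array to that line-delimited form is itself a streaming job. A minimal sketch in Python, assuming the input is a single large JSON array (filenames are hypothetical):

    import json
    import ijson  # streaming parser, so the whole array never sits in memory

    # Convert one big JSON array into line-delimited JSON (JSON Lines)
    with open("big.json", "rb") as src, open("big.jsonl", "w") as dst:
        for record in ijson.items(src, "item"):
            # default=str covers the Decimal values ijson produces for numbers
            dst.write(json.dumps(record, default=str) + "\n")

From there, grep/head/split and jq all operate on clean record boundaries.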
jahewson about 2 years ago
I had to parse a database backup from Firebase, which was, remarkably, a 300GB JSON file. The database is a tree rooted at a single object, which means that any tool that attempts to stream individual objects always wanted to buffer this single 300GB root object. It wasn’t enough to strip off the root either, as the really big records were arrays a couple of levels down, with a few different formats depending on the schema. For added fun our data included some JSON serialised inside strings too.

This was a few years ago and I threw every tool and language I could at it, but they were either far too slow or buffered records larger than memory, even the fancy C++ SIMD parsers did this. I eventually got something working in Go and it was impressively fast and ran on my MacBook, but we never ended up using it as another engineer just wrote a script that read the entire database from the Firebase API record-by-record throttled over several days, lol.
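For the buffering problem described here, a streaming parser with path selectors can pull out array elements that sit a couple of levels below the root without ever materializing the root object. A rough sketch with Python's ijson; the path and filename are hypothetical stand-ins:

    import ijson

    matched = 0
    with open("backup.json", "rb") as f:
        # "users.records.item" yields each element of the array at
        # root["users"]["records"] as it streams past; the enclosing
        # objects are never built in memory
        for record in ijson.items(f, "users.records.item"):
            if "email" in record:  # hypothetical per-record check
                matched += 1
    print(matched)

JSON serialised inside strings still needs a second json.loads on those fields, but by that point each record is small.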
isoprophlex about 2 years ago
Nice writeup, but is jq & GNU parallel or a notebook full of python spaghetti the best (least complex) tool for the job?

DuckDB might be nice here, too. See https://duckdb.org/2023/03/03/json.html
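A sketch of the DuckDB route via its Python API, assuming a recent DuckDB build with the JSON extension; the file and column names are hypothetical:

    import duckdb  # pip install duckdb

    # read_json_auto infers the schema and scans the file directly;
    # nothing is loaded into Python objects except the final result
    result = duckdb.sql("""
        SELECT status, count(*) AS n
        FROM read_json_auto('data.json')
        GROUP BY status
        ORDER BY n DESC
    """).df()
    print(result)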
hamilyon2 about 2 years ago
Clickhouse is the best way to analyze 10GB of JSON by far.

The latest batch of features adds near-native JSON support. Coupled with the ability to add extracted columns, it makes the whole process easy. It is fast, you can use familiar SQL syntax, and you are not constrained by RAM limits.

It is a bit hard if you want to iteratively process the file line by line or use advanced SQL. And you have the one-time cost of writing a schema. Apart from that, I can't think of any downsides.

Edit: clarify a bit
jeffbee about 2 years ago
One thing that will greatly help with `jq` is rebuilding it so it suits your machine. The package of jq that comes with Debian or Ubuntu Linux is garbage that targets k8-generic (on the x86_64 variant), is built with debug assertions, and uses the GNU system allocator, which is the worst allocator on the market. Rebuilding it targeting your platform, without assertions, and with tcmalloc makes it twice as fast in many cases.

On this 988MB dataset I happen to have at hand, compare Ubuntu jq with my local build, with hot caches on an Intel Core i5-1240P:

    time parallel -n 100 /usr/bin/jq -rf ../program.jq ::: *  ->  1.843s
    time parallel -n 100 ~/bin/jq -rf ../program.jq ::: *     ->  1.121s

I know it stinks of Gentoo, but if you have any performance requirements at all, you can help yourself by rebuilding the relevant packages. Never use the upstream mysql, postgres, redis, jq, ripgrep, etc etc.
rvanlaar about 2 years ago
Recently had 28GB of IoT JSON data with no guarantees about the data structure inside.

Used simdjson [1] together with its Python bindings [2]. Achieved massive speedups for analyzing the data. Before, it took on the order of minutes; then it became fast enough to not leave my desk. Reading from disk became the bottleneck, not CPU power and memory.

[1] https://github.com/simdjson/simdjson
[2] https://pysimdjson.tkte.ch/
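A minimal sketch of that setup with pysimdjson, assuming the data is line-delimited (one document per line); the field name is hypothetical:

    import simdjson  # pip install pysimdjson

    parser = simdjson.Parser()  # reusing one parser avoids reallocating buffers

    total = 0
    with open("iot.jsonl", "rb") as f:
        for line in f:
            doc = parser.parse(line)
            # proxies are invalidated by the next parse() call,
            # so pull out what you need before the next iteration
            try:
                if doc["sensor"] == "temperature":  # hypothetical field
                    total += 1
            except KeyError:
                pass
    print(total)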
Groxx about 2 years ago
tbh my usual strategy is to drop into a real programming language and use whatever JSON stream parsing exists there, and dump the contents into a half-parsed file that can be split with `split`. Then you can use "normal" tools on one of those pieces for fast iteration, and simply `cat * | ...` for the final slow run on all the data.

Go is quite good for this, as it's extremely permissive about errors and structure, has very good performance, and comes with a streaming parser in the standard library. It's pretty easy to be finished after only a couple minutes, and you'll be bottlenecked on I/O unless you did something truly horrific.

And when jq isn't enough because you need to do joins or something, shove it into SQLite. Add an index or three. It'll massively outperform almost anything else unless you need rich text content searches (and even then, a fulltext index might be just as good), and it's plenty happy with a terabyte of data.
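The SQLite route needs nothing beyond the standard library. A minimal sketch, assuming line-delimited input; the table and field names are hypothetical:

    import json
    import sqlite3

    con = sqlite3.connect("records.db")
    con.execute("CREATE TABLE IF NOT EXISTS records (id TEXT, ts TEXT, body TEXT)")

    # Keep the raw document alongside the fields you join or filter on
    with open("records.jsonl") as f:
        rows = ((r.get("id"), r.get("ts"), json.dumps(r))
                for r in map(json.loads, f))
        con.executemany("INSERT INTO records VALUES (?, ?, ?)", rows)

    # An index turns repeated exploratory queries from full scans into lookups
    con.execute("CREATE INDEX IF NOT EXISTS idx_records_id ON records(id)")
    con.commit()
    print(con.execute("SELECT count(*) FROM records").fetchone())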
zeitlupe about 2 years ago
Spark is my favorite tool for dealing with JSON. It can read as many JSON files as you want – in any format, located in arbitrarily nested folder structures – offers parallelization, and is great for flattening structs. I've never run into memory issues (or never ran out of workarounds) so far.
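A small PySpark sketch of that pattern; the paths and nested field names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("json-exploration").getOrCreate()

    # recursiveFileLookup picks up JSON files anywhere under data/;
    # multiLine handles files that are one large array or object
    df = (spark.read
               .option("recursiveFileLookup", "true")
               .option("multiLine", "true")
               .json("data/"))

    # Flatten a nested struct into top-level columns, then aggregate in parallel
    flat = df.select("id", F.col("payload.device.name").alias("device_name"))
    flat.groupBy("device_name").count().show()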
19h about 2 years ago
To analyze and process the pushshift Reddit comment & submission archives we used Rust with simd-json and currently get to around 1 - 2GB/s (that's including the decompression of the zstd stream). Still takes a load of time when the decompressed files are 300GB+.

Weirdly enough we ended up networking a bunch of Apple silicon MacBooks together as the Ryzen 32C servers didn't even closely match its performance :/
ginko about 2 years ago
This is something I did recently. We have this binary format we use for content traces. You can dump it to JSON, but that turns a ~10GB file into a ~100GB one. I needed to check some aspects of this with Python, so I used ijson[1] to parse the JSON without having to keep it in memory.

The nice thing is that our dumping tool can also output JSON to STDOUT, so you don't even need to dump the JSON representation to the hard disk. Just open the tool in a subprocess and pipe the output to the ijson parser. Pretty handy.

[1] https://pypi.org/project/ijson/
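A sketch of that subprocess-to-parser pipe; the dumper command and the JSON path are hypothetical stand-ins:

    import subprocess
    import ijson

    # "dump-tool" stands in for the binary-to-JSON dumper; its stdout is parsed
    # incrementally, so the ~100GB JSON never touches disk or RAM in full
    proc = subprocess.Popen(["dump-tool", "--json", "trace.bin"],
                            stdout=subprocess.PIPE)
    try:
        events = 0
        for event in ijson.items(proc.stdout, "events.item"):  # hypothetical path
            events += 1
        print(events)
    finally:
        proc.stdout.close()
        proc.wait()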
version_five about 2 years ago
For a hacky solution, I've often just used grep, tr, awk, etc. If it's a well structured file and all the records are the same or similar enough, it's often possible to grep your way into getting the thing you want on each line, and then use awk or sed to parse out the data. Obviously there are lots of ways this can break down, but 9GB is nothing if you can make it work with these tools. I have found jq much slower.
chrisweekly about 2 years ago
LNAV (https://lnav.org) is ideally suited for this kind of thing, with an embedded sqlite engine and what amounts to a local laptop-scale mini-ETL toolkit w/ a nice CLI. I've been recommending it for the last 7 years since I discovered this awesome little underappreciated util.
Nihilartikel about 2 years ago
If you're doing interactive analysis, converting the JSON to Parquet is a great first step. After that, DuckDB or Spark are a good way to go. I only fall back to Spark if some aggregations are too big to fit in RAM. Spark spills to disk and subdivides the physical plans better, in my experience.
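A minimal sketch of that conversion step with pyarrow, assuming the input is already newline-delimited JSON (filenames hypothetical):

    import pyarrow.json as pj
    import pyarrow.parquet as pq

    # pyarrow.json.read_json expects newline-delimited JSON and infers the schema
    table = pj.read_json("events.jsonl")
    pq.write_table(table, "events.parquet", compression="zstd")

Note that read_json builds the table in memory, so inputs that don't fit in RAM need to be converted in chunks (or handed straight to DuckDB/Spark); the resulting columnar file is usually much smaller and far faster to scan than the raw JSON.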
mastax about 2 years ago
Dask looks really cool, I hope I remember it exists next time I need it.

I've been pretty baffled, and disappointed, by how bad Python is at parallel processing. Yeah, yeah, I know: The GIL. But so much time and effort has been spent engineering around every other flaw in Python and yet this part is still so bad. I've tried every "easy to use" parallelism library that gets recommended and none of them has satisfied. Always: "couldn't pickle this function" or spawning loads of processes that use up all my RAM for no visible reason but don't use any CPU or make any indication of progress. I'm sure I'm missing something, I'm not a Python guy. But every other language I've used has an easy to use stateless parallel map that hasn't given me any trouble.
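For reference, a rough sketch of what Dask's stateless parallel map looks like over chunked JSON Lines data; the file pattern and field names are hypothetical:

    import json
    import dask.bag as db

    # Each partition is a ~64MB slice of the text files, processed in parallel
    bag = db.read_text("logs-*.jsonl", blocksize="64MB").map(json.loads)

    top_hosts = (bag.filter(lambda r: r.get("level") == "error")  # hypothetical fields
                    .pluck("host")
                    .frequencies()
                    .topk(10, key=lambda kv: kv[1])
                    .compute())
    print(top_hosts)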
thakoppno about 2 years ago
Would sampling the JSON down to 20MB and running jq experimentally until one has found an adequate solution be a decent alternative approach?

It depends on the dataset, one supposes.
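If the data is line-delimited (or has been converted as discussed elsewhere in the thread), the sample file is just the first N lines; a stdlib-only sketch, with the filenames and line count as placeholder choices:

    from itertools import islice

    # Grab roughly the first 20k records as a sample to iterate jq programs against
    with open("data.jsonl") as src, open("sample.jsonl", "w") as dst:
        dst.writelines(islice(src, 20_000))

If the file is sorted by time or key, a head-of-file sample can be unrepresentative, so a random or strided sample may be worth the extra pass.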
tylerhannan about 2 years ago
There was an interesting article on this recently...

https://news.ycombinator.com/item?id=31004563

It prompted quite some conversation and discussion and, in the end, an updated benchmark across a variety of tools https://colab.research.google.com/github/dcmoura/spyql/blob/master/notebooks/json_benchmark.ipynb conveniently right in the 10GB dataset size.
berkle4455 about 2 years ago
Just use clickhouse-local or duckdb. Handling data measured in terabytes is easy.
cpuguy83 about 2 years ago
Jq does support slurp mode so you should be able to do this using that... granted I've never attempted this and the syntax is very different.

--- edit ---

I used the wrong term; the correct term is streaming mode.
funstuff007 about 2 years ago
Anyone who's generating multi-GB JSON files on purpose has some explaining to do.
hprotagonist about 2 years ago
i would seriously consider sqlite-utils here.

https://sqlite-utils.datasette.io/en/stable/cli.html
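The same tool is scriptable from Python; a small sketch, assuming line-delimited input and a hypothetical column to index:

    import json
    import sqlite_utils  # the library behind the sqlite-utils CLI

    db = sqlite_utils.Database("records.db")

    # Columns and types are inferred from the dicts; inserts are batched
    with open("records.jsonl") as f:
        db["records"].insert_all((json.loads(line) for line in f),
                                 batch_size=10_000)

    db["records"].create_index(["user_id"], if_not_exists=True)  # hypothetical column
    print(db["records"].count)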
cube2222 about 2 years ago
OctoSQL[0] or DuckDB[1] will most likely be much simpler, while going through 10 GB of JSON in a couple seconds at most.

Disclaimer: author of OctoSQL

[0]: https://github.com/cube2222/octosql

[1]: https://duckdb.org/
reegnz about 2 years ago
Allow me to advertise my zsh jq plugin, jq-repl: https://github.com/reegnz/jq-zsh-plugin

I find that for big datasets choosing the right format is crucial. I use the json-lines format plus some shell filtering (e.g. head and tail to limit the range, egrep or ripgrep for the more trivial filtering) to reduce the dataset to a couple of megabytes, then use that jq-repl of mine to iterate fast on the final jq expression.

I found that the REPL form factor works really well when you don't exactly know what you're digging for.
UnCommonLisp about 2 years ago
Use ClickHouse, either clickhouse-server or clickhouse-local. No fuss, no muss.
mattewong about 2 years ago
If it could be tabular in nature, maybe convert to sqlite3 so you can make use of indexing, or to CSV to make use of high-performance tools like xsv or zsv (the latter of which I'm an author).

https://github.com/liquidaty/zsv/blob/main/docs/csv_json_sqlite.md

https://github.com/BurntSushi/xsv
code-faster about 2 years ago
> Also note that this approach generalizes to other text-based formats. If you have 10 gigabyte of CSV, you can use Miller for processing. For binary formats, you could use fq if you can find a workable record separator.

You can also generalize it without learning a new minilanguage by using https://github.com/tyleradams/json-toolkit which converts csv/binary/whatever to/from json.
Animats about 2 years ago
Rust's serde-json will iterate over a file of JSON without difficulty, and will write one from an iterative process without building it all in memory. I routinely create and read multi-gigabyte JSON files. They're debug dumps of the scene my metaverse viewer is looking at.

Streaming from large files was routine for XML, but for some reason, JSON users don't seem to work with streams much.
liammclennan about 2 years ago
Flare’s (https://blog.datalust.co/a-tour-of-seqs-storage-engine/) command line tool can query CLEF-formatted (newline-delimited) JSON files and is perhaps an order of magnitude faster.

Good for searching and aggregating. Probably not great for transformation.
DeathArrow about 2 years ago
You can deserialize the JSONs and filter the resulting arrays or lists. For C# the IDE can automatically generate the classes from JSON and I think there are tools for other languages to generate data structures from JSON.
maCDzP about 2 years ago
I like SQLite and JSON columns. I wonder how fast it would be if you save the whole JSON file in one record and then query SQLite. I bet it’s fast.

You could probably use that one record to then build tables in SQLite that you can query.
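A sketch of that idea with the stdlib sqlite3 module, assuming the file is a top-level JSON array and a SQLite build with the JSON functions (built in by default in recent versions); the JSON paths are hypothetical:

    import sqlite3

    con = sqlite3.connect("blob.db")
    con.execute("CREATE TABLE IF NOT EXISTS dump (body TEXT)")
    with open("data.json") as f:
        con.execute("INSERT INTO dump VALUES (?)", (f.read(),))

    # json_each() explodes a JSON array stored in one row into a virtual table,
    # which can then populate a real, indexable table
    con.execute("""
        CREATE TABLE IF NOT EXISTS records AS
        SELECT json_extract(value, '$.id')     AS id,
               json_extract(value, '$.status') AS status
        FROM dump, json_each(dump.body)
    """)
    con.commit()
    print(con.execute("SELECT status, count(*) FROM records GROUP BY status").fetchall())

One caveat: SQLite caps a single value at roughly 1GB by default, so a truly multi-gigabyte dump would still need to be split across several rows.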
kosherhurricane about 2 years ago
What I would have done is first create a map of the file, just the keys and shapes, without the data. That way I can traverse the file. And then mmap the file to traverse and read the data.

A couple of dozen lines of code would do it.
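A rough sketch of the first half of that idea (the key-and-shape map) using a streaming parse; it leans on ijson rather than mmap, so it only covers the shape-discovery step:

    from collections import Counter

    import ijson

    # Tally which key paths occur and with what value types, keeping none of the data
    shapes = Counter()
    with open("data.json", "rb") as f:
        for prefix, event, value in ijson.parse(f):
            if event not in ("start_map", "map_key", "end_map",
                             "start_array", "end_array"):
                shapes[(prefix, event)] += 1

    for (path, kind), n in shapes.most_common(25):
        print(f"{path or '<root>'}: {kind} x{n}")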
2h about 2 years ago
Note the Go standard library has a streaming parser:

https://go.dev/play/p/O2WWn0qQrP6
kashif about 2 years ago
Might be useful for some - https://github.com/kashifrazzaqui/json-streamer
zerop about 2 years ago
The other day I discovered DuckDB on HN, which allows firing SQL at JSON. But I am not sure if it can handle this much data.
nn3 about 2 years ago
the real trick is to do the debugging/exploration on a small subset of the data. Then usually you don't need all these extra measures because the real processing is only done a small number of times.