
Nimble: A new columnar file format by Meta [video]

121 points · by aduffy · about 1 year ago

11 comments

CharlesW · about 1 year ago
I learned that "Nimble" is the new name for "Alpha", discussed in this 2023 report: https://www.cidrdb.org/cidr2023/papers/p77-chattopadhyay.pdf

Here's an excerpt that may save some folks a click or three…

> "While storing analytical and ML tables together in the data lakehouse is beneficial from a management and integration perspective, it also imposes some unique challenges. For example, it is increasingly common for ML tables to outgrow analytical tables by up to an order of magnitude. ML tables are also typically much wider, and tend to have tens of thousands of features usually stored as large maps.

> "As we executed on our codec convergence strategy for ORC, it gradually exposed significant weaknesses in the ORC format itself, especially for ML use cases. The most pressing issue with the DWRF format was metadata overhead; our ML use cases needed a very large number of features (typically stored as giant maps), and the DWRF map format, albeit optimized, had too much metadata overhead. Apart from this, DWRF had several other limitations related to encodings and stripe structure, which were very difficult to fix in a backward-compatible way. Therefore, we decided to build a new columnar file format that addresses the needs of the next generation data stack; specifically, one that is targeted from the onset towards ML use cases, but without sacrificing any of the analytical needs.

> "The result was a new format we call Alpha. Alpha has several notable characteristics that make it particularly suitable for mixed analytical and ML training use cases. It has a custom serialization format for metadata that is significantly faster to decode, especially for very wide tables and deep maps, in addition to more modern compression algorithms. It also provides a richer set of encodings and an adaptive encoding algorithm that can smartly pick the best encoding based on historical data patterns, through an encoding history loopback database. Alpha requires fewer streams per column for many common data types, making read coalescing much easier and saving I/Os, especially for HDDs. Alpha was written in modern C++ from scratch in a way that allows it to be extended easily in the future.

> "Alpha is being deployed in production today for several important ML training applications and showing 2-3x better performance than ORC on decoding, with comparable encoding performance and file size."
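The "adaptive encoding algorithm" mentioned above picks a per-column encoding from observed data statistics. This is not Meta's actual algorithm (which uses a history loopback database), but a minimal sketch of the idea, with made-up thresholds:

```python
# Illustrative sketch: choose a column encoding from simple statistics,
# the way an adaptive encoder might. Thresholds are arbitrary.
def choose_encoding(values):
    n = len(values)
    distinct = len(set(values))
    # Number of runs of equal adjacent values.
    runs = 1 + sum(1 for a, b in zip(values, values[1:]) if a != b)
    if runs <= n // 4:       # long runs -> run-length encoding wins
        return "rle"
    if distinct <= n // 10:  # low cardinality -> dictionary encoding
        return "dictionary"
    return "plain"           # high-entropy data: store as-is

print(choose_encoding([1, 1, 1, 1, 2, 2, 2, 2]))  # rle
print(choose_encoding([0, 1] * 50))               # dictionary
print(choose_encoding(list(range(100))))          # plain
```

A production format would make this decision per stripe or per chunk, so the same column can switch encodings as its data distribution changes.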
RyanHamilton · about 1 year ago
Parquet + Arrow hopefully seem to be emerging as standards. I would much rather see those standards improved than have new formats emerge. Even within those existing formats there is enough variation that some platforms only support a subset of functionality. On top of that, the performance and size of the libraries is poor.

e.g. DuckDB / ClickHouse Parquet nanosecond compatibility: https://github.com/duckdb/duckdb/issues/9852
e.g. The Arrow SQL driver is 70+ MB in Java.
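The nanosecond-compatibility issue linked above comes down to precision coercion: a reader that only understands microsecond timestamps silently drops the sub-microsecond digits. A library-free sketch of the loss (the timestamp value is hypothetical):

```python
# A nanosecond-precision epoch timestamp (hypothetical value).
ts_ns = 1_700_000_000_123_456_789

# A reader limited to microsecond precision coerces by integer
# division, discarding the last three digits.
ts_us = ts_ns // 1_000
restored = ts_us * 1_000

print(ts_ns - restored)  # 789 -> nanoseconds lost in the round trip
```

The silent part is what bites: both sides believe they handled the column correctly, and the discrepancy only shows up when the two engines compare values.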
jauntywundrkind · about 1 year ago
There's already been some interesting column format optimization work at Meta, as their Velox execution engine team worked with Apache Arrow to align their columnar formats. This talk is actually happening at VeloxCon, so there's got to be some awareness! https://engineering.fb.com/2024/02/20/developer-tools/velox-apache-arrow-15-composable-data-management/ https://news.ycombinator.com/item?id=39454763

I wonder how much overlap, if any, there is here, and whether it was intentional or accidentally similar. Ah, "return efficient Velox vectors" is on the list, but there still seems likely to be some overlap in encoding strategies etc.

The four main points seem to be: a) encoding metadata as part of the stream rather than fixed metadata, b) nulls are just another encoding, c) no stripe footer / only stream locations are in the footer, d) FlatBuffers! Shout out to FlatBuffers, wasn't expecting to see them making a comeback!

I do wish there were a lot more diagrams/slides. There are four bullet points, and Yoav Helfman talks to them, but there's not a ton of showing what he's talking about.
horusporus · about 1 year ago
I was really hoping to see Cap'n Proto used for the format, since that has fast access without decoding, and reasonable backwards compatibility with old files. Anyone know why FlatBuffers were used?
mempko · about 1 year ago
I would love to see support in Apache Arrow to read this format. Parquet is already supported.
MaximilianEmel · about 1 year ago
Is there a quick description of the structure of it anywhere?
mrtimo · about 1 year ago
How does this compare with the Parquet format?
Kalanos · about 1 year ago
By the time data has been preprocessed for ML, it is numerically encoded as floats, so .npy/.npz is a good fit and `np.memmap` is an incredible way to seek into ndim data.
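The memmap point above in a minimal sketch (the file name and feature-matrix shape are hypothetical):

```python
import os
import tempfile

import numpy as np

# Hypothetical layout: a float32 feature matrix, shape (rows, features).
path = os.path.join(tempfile.mkdtemp(), "features.npy")
np.save(path, np.arange(12, dtype=np.float32).reshape(3, 4))

# Memory-mapping avoids reading the whole file; slicing seeks straight
# to the bytes backing the requested rows.
mm = np.load(path, mmap_mode="r")
row = np.asarray(mm[1])  # touches only row 1's bytes
print(row.tolist())  # [4.0, 5.0, 6.0, 7.0]
```

Because the .npy header records dtype and shape, the row offset is pure arithmetic, which is what makes random access into n-dimensional data cheap.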
yigitkonur35 · about 1 year ago
Curious about Clickhouse’s approach to this compression structure.
gigatexal · about 1 year ago
Hmm, another contender in the open table format space. Nice.
khaledh · about 1 year ago
FWIW, the name clashes with Nim's package manager, nimble: https://github.com/nim-lang/nimble