Show HN: ffjson: faster json serialization in Go

83 points by pquerna, about 11 years ago

11 comments

haberman, about 11 years ago

As someone who has been working on parsing/serialization for many years, I can absolutely confirm that generating schema-specific code will beat generic code almost every time.

The article discovers an important tradeoff: speed vs. convenience. Generated code is faster but less convenient, because it adds steps to your compile. And you pay this compile-time overhead for every message type you want to manipulate. The pain of this generated code was one of Kenton Varda's motivations for creating Cap'n Proto after working on Protocol Buffers for years. Unlike Protocol Buffers, Cap'n Proto doesn't need to generate parsing/serialization code because its serialization format also works as an in-memory format.

I have taken a somewhat different approach to the problem, with my Protocol Buffer / JSON / etc. serialization framework upb (https://github.com/haberman/upb). Instead of using static code generation, I use a JIT approach where I generate specialized code on the fly. This approach is particularly important when you are wrapping the library in an interpreted language like Python/Ruby/Lua/JavaScript, because users of those languages don't have a compile cycle at all, so adding one is a large inconvenience.

My library isn't ready for prime time, but I'm hoping to have something ready to use this year.
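For flavor, here is a minimal sketch of the kind of schema-specific marshaller a generator can emit, writing fields straight into a buffer with no reflection at runtime. The type and method names are illustrative, not ffjson's actual generated output.

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// User is an example message type a generator would inspect.
type User struct {
	Name string
	Age  int
}

// MarshalJSONFast is the kind of specialized serializer a code
// generator could emit: it appends each field directly to a buffer.
// strconv.Quote covers plain ASCII strings; real generators emit
// full JSON string escaping.
func (u *User) MarshalJSONFast() []byte {
	var buf bytes.Buffer
	buf.WriteString(`{"Name":`)
	buf.WriteString(strconv.Quote(u.Name))
	buf.WriteString(`,"Age":`)
	buf.WriteString(strconv.Itoa(u.Age))
	buf.WriteByte('}')
	return buf.Bytes()
}

func main() {
	u := &User{Name: "gopher", Age: 10}
	fmt.Println(string(u.MarshalJSONFast())) // {"Name":"gopher","Age":10}
}
```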
hierro, about 11 years ago

This is way more complicated than it needs to be. Use code.google.com/p/go.tools/go/types to process the AST and you'll get basically the same information that the compiler sees. With that you can generate the code pretty easily. For comparison, our JSON code generator implementation is just ~350 lines and supports having different JSON representations for the same type, varying depending on the container type.

Also, if you want to make serialization faster you need to understand exactly what makes encoding/json slow (hint: it's not only reflect) and remove all reasonable bottlenecks. You state that megajson does not support the MarshalJSON interface like that's a bad thing, but I'm pretty sure that's deliberate, because it's indeed a feature. When encoding/json encounters a type which implements MarshalJSON it does the following:

1. Calls MarshalJSON to obtain its JSON representation as []byte.
2. Validates the produced JSON using a slower-than-the-bad-guy's-horse function-based state machine which processes each character individually.
3. Copies the []byte returned by MarshalJSON to its own buffer.

Unsurprisingly (after reading encoding/json's code, of course), having a MarshalJSON method is way slower than letting encoding/json use reflection unless the JSON you're generating is trivial and almost unnested, because the reflection path avoids the extra allocations, copies, and the validation step.
chimeracoder, about 11 years ago

Shameless plug, but this looks like the exact inverse of gojson [0], which generates code (struct definitions) based on sample JSON.

I originally wrote it when writing a client library to interface with a third-party API; it saves a lot of effort compared to typing out struct definitions manually, and a lot of type assertions compared to using map[string]interface{} everywhere.

[0] https://github.com/ChimeraCoder/gojson
hannibalhorn, about 11 years ago
The implementation of this is pretty interesting, in that it generates code that imports your code, compiles it, and then uses reflection to generate the serialization code. And in the end, that worked out better for the author than using the AST.
AYBABTME, about 11 years ago

I made a library this weekend that doesn't need code generation to achieve a 2x improvement [1] over the standard library.

While the OP only implemented the encoding part, I only implemented the decoding part =):

https://github.com/aybabtme/fatherhood

So I guess they overlap nicely in that.

[1]: https://github.com/aybabtme/fatherhood#performance
United857, about 11 years ago

Looking at the name, at first I thought ffmpeg author and general programming god Fabrice Bellard had come up with it.
kevrone, about 11 years ago

Seems like this approach might also work for building type-specific collection implementations!
mwsherman, about 11 years ago

On the .NET side, Jil is doing something similar, creating a custom static serializer per type. It's able to do the code generation at runtime by emitting IL: https://github.com/kevin-montrose/Jil
eikenberry, about 11 years ago

You can get nearly the same speedup by just avoiding reflection. You unmarshal into an interface{}, then pull the data out manually using type assertions as necessary. In my last project I think I got about a 1.6–1.7x speedup this way.
otterley, about 11 years ago

Good stuff there.

Feature request: optionally emit canonicalized (key-sorted) JSON.
knodi, about 11 years ago

This is a good example of why everyone should avoid using the reflect package as much as possible.

I use reflect for quick development and then remove it before a production rollout.