This uses std::string all over the place, which allocates memory under the hood.<p>I'm not sure how any JSON parser could avoid memory allocation without a difficult-to-use interface. Numbers, arrays, strings, and objects are unbounded by the JSON spec, so a truly malloc-free library would need to provide a kind of streaming interface where things are returned in fixed-size chunks.<p>JSON wouldn't be my first choice for data storage in situations where I needed to avoid dynamic memory.
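To make the "fixed-size chunks" idea concrete, here is a minimal sketch of what such a streaming interface might look like. Everything here (the name <i>stream_json_string</i>, the chunk size, the callback shape) is hypothetical, and escape handling is omitted; the point is only that a string value of unbounded length can be surfaced through a stack buffer with no allocation:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical sketch: pull a JSON string value out in fixed-size chunks,
// invoking a callback per chunk, with no dynamic allocation.
// (Escape sequences are not handled in this sketch.)
template <typename Fn>
bool stream_json_string(const char* json, Fn on_chunk) {
    if (*json != '"') return false;      // expect an opening quote
    ++json;
    constexpr std::size_t CHUNK = 8;     // fixed-size buffer on the stack
    char buf[CHUNK];
    std::size_t n = 0;
    for (; *json && *json != '"'; ++json) {
        buf[n++] = *json;
        if (n == CHUNK) { on_chunk(buf, n); n = 0; }
    }
    if (*json != '"') return false;      // unterminated string
    if (n) on_chunk(buf, n);             // flush the final partial chunk
    return true;
}
```

A caller that does want a contiguous string can still reassemble the chunks itself, which is exactly the "difficult to use" trade-off: the allocation decision moves from the parser to the application.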
> As of today, all json parsers rely on dynamic data structures to parse and store the json objects.<p>I'm not sure that's entirely fair. Callback-based parsers like YAJL leave the application free to store the data in whatever data structure it wants, or even to stream-process the input without storing it in a data structure at all.<p>But regardless, the meta-programming approach described here is interesting and novel. Generating structure-specific parsing code is a well-explored area (for example, Protocol Buffers is designed entirely around this idea), but doing it as C++ metaprogramming is a novel approach (Protocol Buffers relies on a separate code-generation step).<p>I don't actually understand how the object inspection and compile-time codegen works with this meta-programming approach; it will be interesting to dig in a little deeper and learn more.
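To illustrate the callback style (this is a hedged sketch of the general SAX-like pattern, not YAJL's actual API): the parser only reports events, and the application decides what, if anything, to store. The toy grammar below handles only flat <i>{"key": number, ...}</i> objects, and all names are illustrative:

```cpp
#include <cassert>
#include <cctype>
#include <cstdlib>
#include <functional>
#include <string>

// Hypothetical event-based interface: the parser never builds a data
// structure; it invokes callbacks and lets the caller choose storage.
// Toy grammar: flat objects of the form {"key": number, ...}.
bool parse_flat_object(const char* p,
                       const std::function<void(const std::string&)>& on_key,
                       const std::function<void(double)>& on_number) {
    if (*p++ != '{') return false;
    while (*p && *p != '}') {
        while (std::isspace((unsigned char)*p) || *p == ',') ++p;
        if (*p != '"') return false;
        const char* start = ++p;
        while (*p && *p != '"') ++p;         // scan the key
        on_key(std::string(start, p - start));
        ++p;                                 // skip closing quote
        while (std::isspace((unsigned char)*p)) ++p;
        if (*p++ != ':') return false;
        char* end = nullptr;
        double v = std::strtod(p, &end);     // number value
        if (end == p) return false;
        on_number(v);
        p = end;
        while (std::isspace((unsigned char)*p)) ++p;
    }
    return *p == '}';
}
```

An application that only wants a running sum, for instance, can pass a no-op key callback and accumulate numbers, never materialising a tree at all.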
On one hand this is undoubtedly a very clever use of C++ features. On the other hand, that's a <i>heck of a lot</i> of scaffolding (just look in the /iod directory), and the more scaffolding there is, the more caution is needed in adopting the code.<p>The same goal - not parsing what's not needed - can be achieved with conventional callback-based C code. You basically go through the json data, parse, say, a field and call the app asking "is this ok? shall I proceed?". If it's a yes, then you indeed proceed and parse out the value chunk and pass it to the app the same way. If it's a no, you either abort or skip over the value. The end effect is the same - parsing of an invalid json input is aborted as soon as the app flags the first bad entry, and unwanted fields are never parsed in full.<p>So I suspect this is little more than the marketing spin of a proud developer -<p><pre><code> This makes its performances impossible to match
in other languages such as C or Java that do not
provide static introspection.
</code></pre>
I am fairly certain that vurtun's code [1] can match and most likely beat this lib's performance, with ease.<p>[1] <a href="https://news.ycombinator.com/item?id=8609236" rel="nofollow">https://news.ycombinator.com/item?id=8609236</a>
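The "shall I proceed?" scheme described above can be sketched roughly as follows. This is a hedged illustration, not vurtun's or anyone's actual code: all names are made up, the grammar is again a toy flat <i>{"key": number, ...}</i> object, and the abort variant (returning false from the parser when the app rejects a field) is omitted in favour of the skip variant:

```cpp
#include <cassert>
#include <cstdlib>
#include <functional>
#include <string>

// Hedged sketch of ask-before-parsing: for each key, `want` answers
// "parse this value" (true) or "skip it" (false). A skipped value is
// scanned to the next delimiter without ever being interpreted.
bool parse_selective(const char* p,
                     const std::function<bool(const std::string&)>& want,
                     const std::function<void(const std::string&, double)>& on_field) {
    if (*p++ != '{') return false;
    while (*p && *p != '}') {
        while (*p == ',' || *p == ' ') ++p;
        if (*p != '"') return false;
        const char* s = ++p;
        while (*p && *p != '"') ++p;
        std::string key(s, p - s);
        ++p;                                  // closing quote
        while (*p == ' ') ++p;
        if (*p++ != ':') return false;
        while (*p == ' ') ++p;
        if (want(key)) {
            char* end = nullptr;
            double v = std::strtod(p, &end);
            if (end == p) return false;
            on_field(key, v);
            p = end;
        } else {
            // unwanted field: never parsed in full, just skipped over
            while (*p && *p != ',' && *p != '}') ++p;
        }
        while (*p == ' ') ++p;
    }
    return *p == '}';
}
```

The end effect matches the claim in the parent comment: fields the application doesn't want cost only a delimiter scan, with no C++ metaprogramming involved.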
See also: Scala Pickling [1]. Serialisation and deserialisation logic optimised for a specific datatype is generated purely at compile time using Scala Macros [2].<p>[1] <a href="http://lampwww.epfl.ch/~hmiller/pickling/" rel="nofollow">http://lampwww.epfl.ch/~hmiller/pickling/</a><p>[2] <a href="http://scalamacros.org" rel="nofollow">http://scalamacros.org</a>
> This makes its performances impossible to match in other languages such as C or Java that do not provide static introspection.<p>A CHALLENGE!<p>So, er, who's up for it?<p>You could implement an analogue of this approach in Java. It's true that Java doesn't have language constructs that would let you do this as part of compilation, but Java has its ways. You could write an annotation processor to do this at compile time, or use a bytecode parser at runtime (this is yucky, but a fairly standard technique these days). Either way, the output would be a pair of synthetic classes which implemented the parser and encoder. A tool like this would be moderately laborious to write, but a straightforward matter of programming.
RapidJSON claims to support "in-situ parsing", which is presumably mostly zero copy, and presumably doesn't allocate much either. I'd like to see benchmarks over comparable code.
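For readers unfamiliar with the term, the core of the in-situ idea can be sketched in a few lines. This is not RapidJSON's implementation, just an illustration of the technique: decode a JSON string value inside a <i>mutable</i> buffer, writing the unescaped bytes over the input and returning a pointer into that same buffer, so nothing is copied or allocated. The function name is hypothetical and only the <i>\"</i> and <i>\\</i> escapes are handled here:

```cpp
#include <cassert>
#include <cstring>

// Sketch of in-situ string decoding: the write cursor trails the read
// cursor inside the caller's own buffer. Returns a pointer into that
// buffer (nullptr on malformed input); *out_len gets the decoded length.
// Real parsers also handle \n, \t, \uXXXX, etc.
char* unescape_insitu(char* p, std::size_t* out_len) {
    if (*p != '"') return nullptr;
    char* start = ++p;
    char* w = start;                     // write cursor
    while (*p && *p != '"') {
        if (*p == '\\') {
            ++p;
            if (*p == '"' || *p == '\\') *w++ = *p++;
            else return nullptr;         // escape unsupported in this sketch
        } else {
            *w++ = *p++;
        }
    }
    if (*p != '"') return nullptr;       // unterminated string
    *out_len = static_cast<std::size_t>(w - start);
    return start;                        // points into the caller's buffer
}
```

The decoded result is at most as long as the escaped input, which is why overwriting in place is safe; the trade-off is that the original buffer is destroyed, so it only suits cases where the raw input isn't needed afterwards.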