Every popular RPC format I'm aware of supports dynamic schemas; they're just generally not used unless strictly necessary. E.g. Cap'n Proto has SchemaLoader (dynamically loading compiled schemas) and SchemaParser (parsing schema strings into generic objects): <a href="https://capnproto.org/cxx.html#dynamic-reflection" rel="nofollow">https://capnproto.org/cxx.html#dynamic-reflection</a>, and protobuf has DynamicMessage: <a href="https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.dynamic_message" rel="nofollow">https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.dynamic_message</a> (sketch at the end of this comment), etc. Plus, literally anything you can compile can also be done dynamically, so this isn't an inherent quality of <i>an encoding</i>, only at best of <i>a specific tool</i>.<p>Broadly: I'm sorta curious about the details here, but I'll have to read more later. It seems to be promising the world without a whole lot of evidence, and with some incorrect claims, while being implemented as a fairly simple (often good!) serialized form that is also quite large (less good!). I also have no idea why it's so obsessed with sortability: the encoded data as a whole can't be meaningfully sorted, and per-value that just sounds like "is big-endian"...<p>E.g. this is the serialized form of a pre-defined struct with a single field, like `{"age":20}`: <a href="https://docs.rs/no_proto/0.9.60/no_proto/format/index.html" rel="nofollow">https://docs.rs/no_proto/0.9.60/no_proto/format/index.html</a><p><pre><code> // [0, 0, 0, 0, 0, 6, 0, 0, 0, 26, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 20]
// [ root ptr, vtable, data]
</code></pre>
which is pretty far from protocol buffers': <a href="https://developers.google.com/protocol-buffers/docs/encoding" rel="nofollow">https://developers.google.com/protocol-buffers/docs/encoding</a><p><pre><code> 08 14
 ^-- field 1, wire type 0 (varint)
    ^-- value: 20 (the docs' sample "08 96 01" encodes 150 the same way)
</code></pre>
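<p>To make those two bytes concrete, here's a minimal sketch of the tag/varint decode in plain C++ (no protobuf dependency; the byte values are just the example above):<p><pre><code> #include &lt;cstdint&gt;
 #include &lt;cstdio&gt;
 #include &lt;vector&gt;

 int main() {
     // The two bytes above: a field tag, then the value, both as varints.
     std::vector&lt;uint8_t&gt; bytes = {0x08, 0x14};
     size_t i = 0;

     // Varints: 7 payload bits per byte, least-significant group first,
     // high bit set means "more bytes follow".
     auto varint = [&amp;]() -&gt; uint64_t {
         uint64_t v = 0;
         int shift = 0;
         while (bytes[i] &amp; 0x80) {
             v |= uint64_t(bytes[i++] &amp; 0x7F) &lt;&lt; shift;
             shift += 7;
         }
         v |= uint64_t(bytes[i++]) &lt;&lt; shift;
         return v;
     };

     uint64_t tag = varint();    // 0x08 -&gt; (field 1 &lt;&lt; 3) | wire type 0
     uint64_t value = varint();  // 0x14 -&gt; 20
     std::printf("field=%llu wire=%llu value=%llu\n",
                 (unsigned long long)(tag &gt;&gt; 3),
                 (unsigned long long)(tag &amp; 7),
                 (unsigned long long)value);
     // Prints: field=1 wire=0 value=20
 }
</code></pre>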
This sort of waste in favor of simplicity (27 bytes vs. 2, here) seems to be spread throughout the encoding. I'll happily admit that I think NoProto's format is the easier of the two to read and understand by hand, but that's not generally why we choose binary encodings. It's possible the simplicity gives it a performance edge, but I strongly suspect that, in practice, performance is influenced far more by the various implementations' internal details than by the encoding itself.
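<p>And to back up the dynamic-schemas point at the top, here's a rough C++ sketch of protobuf's DynamicMessage building and serializing that same `{"age":20}` message entirely at runtime. The "Person"/"age" names are just placeholders I picked; the APIs are the ones from the linked reference, though treat this as untested illustration rather than copy-paste material:<p><pre><code> #include &lt;google/protobuf/descriptor.h&gt;
 #include &lt;google/protobuf/descriptor.pb.h&gt;
 #include &lt;google/protobuf/dynamic_message.h&gt;
 #include &lt;memory&gt;
 #include &lt;string&gt;

 using namespace google::protobuf;

 int main() {
     // Describe `message Person { optional int32 age = 1; }` at runtime --
     // no .proto file on disk, no generated code.
     FileDescriptorProto file_proto;
     file_proto.set_name("person.proto");
     DescriptorProto* msg = file_proto.add_message_type();
     msg-&gt;set_name("Person");
     FieldDescriptorProto* field = msg-&gt;add_field();
     field-&gt;set_name("age");
     field-&gt;set_number(1);
     field-&gt;set_type(FieldDescriptorProto::TYPE_INT32);
     field-&gt;set_label(FieldDescriptorProto::LABEL_OPTIONAL);

     DescriptorPool pool;
     const Descriptor* descriptor =
         pool.BuildFile(file_proto)-&gt;FindMessageTypeByName("Person");

     // DynamicMessage: a Message instance driven purely by the descriptor,
     // manipulated through the reflection API.
     DynamicMessageFactory factory;
     std::unique_ptr&lt;Message&gt; person(factory.GetPrototype(descriptor)-&gt;New());
     person-&gt;GetReflection()-&gt;SetInt32(
         person.get(), descriptor-&gt;FindFieldByName("age"), 20);

     std::string bytes;
     person-&gt;SerializeToString(&amp;bytes);  // 0x08 0x14, same two bytes as above
     return 0;
 }
</code></pre>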