I see a pretty major problem with this: it seems to depend on the order of key-value pairs in object literals being well-defined. That order is not defined by the JSON or ECMAScript standards, so you can't depend on the order of keys in a JSON object unless you explicitly impose one (alphabetical, for instance). I like the basic concept of compressing JSON to JSON, but this is not a particularly good way to do it, since the order of those keys may not be preserved in round trips through JSON encoders and decoders in various languages.
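A quick illustration of how fragile this is, even within JavaScript itself: modern engines iterate integer-like keys in ascending numeric order regardless of the order in the source text, so a parse/stringify round trip can silently reorder them (and decoders in other languages, backed by hash maps or sorted dicts, may reorder everything):

    // Round-tripping can reorder keys even in one language:
    // integer-like keys are iterated in ascending numeric order,
    // not in the order they appeared in the source text.
    var doc = JSON.parse('{"10": "ten", "2": "two"}');
    JSON.stringify(doc); // '{"2":"two","10":"ten"}' -- order changed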
Would like to emphasize that this is only really useful in environments where gzip is not available (as the OP notes). Some tests using the demo JSON (minified):

    test.json    = 285 bytes
    test.rjson   = 233 bytes (18% smaller)
    test.json.gz = 205 bytes (27% smaller)

If you are able to bundle an RJSON parser, why not just bundle an existing, well-understood and well-tested compression scheme such as http://stuartk.com/jszip/ or https://github.com/olle/lz77-kit/blob/master/src/main/js/lz77.js instead?
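For illustration, bundling one of those looks about like this (a sketch only; `LZ77.compress`/`LZ77.decompress` are assumed names, so check the actual exports of whichever library you pick):

    // Sketch of bundling a generic JS compressor instead of RJSON.
    // LZ77.compress / LZ77.decompress are assumed names for the
    // linked lz77-kit -- check the library's actual exports.
    var payload = { users: [{ first: "Homer", last: "Simpson" }] };
    var packed = LZ77.compress(JSON.stringify(payload)); // send this
    var restored = JSON.parse(LZ77.decompress(packed));  // receive side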
> reduce JSON data size and network traffic when gzip isn't available. For example, in-browser 3D-modeling tools like Mydeco 3D-planner may process and send to server megabytes of JSON-data;

`Content-Encoding: gzip`, anyone?
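For anyone who hasn't wired this up: it's one middleware line on the server, and the browser decompresses for free. A minimal Express sketch (any HTTP server with gzip support works the same way):

    // Minimal sketch: the browser sends "Accept-Encoding: gzip",
    // the compression middleware answers with
    // "Content-Encoding: gzip", and no client-side code is needed.
    var express = require('express');
    var compression = require('compression');

    var app = express();
    app.use(compression());
    app.get('/planner', function (req, res) {
      res.json({ users: ['Homer', 'Hank'] }); // gzipped on the wire
    });
    app.listen(3000);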
Good idea, but I think for those extreme cases where you really have a use for this you might as well go with a specialized protocol, which can be based on JSON if you need it.

I had to do this for an application which streamed several hundred data points per second to a browser client. Both data size on the network and parsing time in the browser were my biggest issues. Since it was an HTML app I had to use either JSON.parse or custom parsing written in JavaScript, the second option being too slow to be viable. I ended up with something based almost entirely on JSON arrays, where the client knows what the different items in the array mean. With his example it would look something like this:

    [7, ["programming", "javascript"], ["Peter", "Griffin", "Homer", "Simpson", "Hank", "Hill"]]

So in other words it's just enough overhead to make it parsable by JSON.parse, but otherwise you only transfer the actual data.

Note that I wouldn't recommend this option unless you really hit a wall and realize that this is in fact a bottleneck.
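For the curious, the receiving side of that example is trivial (a sketch; the field names are my guesses from the example above, not part of any spec):

    // Sketch of the client-side decode for the array-only scheme.
    // The field names (id, tags, users) are assumptions based on
    // the example above.
    function decode(msg) {
      var users = [];
      var names = msg[2]; // flat [first, last, first, last, ...]
      for (var i = 0; i < names.length; i += 2) {
        users.push({ first: names[i], last: names[i + 1] });
      }
      return { id: msg[0], tags: msg[1], users: users };
    }

    decode(JSON.parse(
      '[7,["programming","javascript"],' +
      '["Peter","Griffin","Homer","Simpson","Hank","Hill"]]'));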
If your HTTP connection is going to be gzip compressed, then manual compression of this kind is not guaranteed to reduce the size of the final result, and may in fact hurt it.
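This is easy to check in Node (a toy experiment; exact numbers vary, and gzipping already-gzipped data stands in here for any manual pre-compression pass):

    // Toy demonstration: compressing already-compressed data.
    var zlib = require('zlib');

    var rows = [];
    for (var i = 0; i < 1000; i++) {
      rows.push({ first: 'Homer', last: 'Simpson', id: i });
    }
    var json = JSON.stringify(rows); // highly redundant: gzip's best case

    var once = zlib.gzipSync(json);
    var twice = zlib.gzipSync(once);
    console.log(json.length, once.length, twice.length);
    // "twice" is typically no smaller than "once": the first pass
    // already removed the redundancy the second pass would exploit.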
How would it compress and then decompress the following document?

    {"users": [
        {"first": "Homer", "last": "Simpson"},
        {"first": "Hank", "last": "Hill"},
        [2, "Peter", "Griffin"]
    ]}
I wondered if it was April Fools' Day... Really, JSON is lightweight enough that it doesn't need compressing. We're not dealing with XML here! This just adds another point of failure.
If you just use gzip, supported by most browsers, it compresses without the need for special software on the client side to rebuild the original JSON response. And I bet it does a better job at compression. I thought of doing something like this to rebuild cycles in object graphs, but I didn't, because it requires special parsing logic on the client.
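For what it's worth, the encode half of that cycle idea can be sketched with JSON.stringify's replacer alone (the `$ref` convention is illustrative, and a real implementation also needs a decoder that patches the references back into the graph):

    // Toy version of the cycle-rebuilding idea: replace repeat
    // visits with {"$ref": id} placeholders on encode. "$ref" is
    // an illustrative convention, not a standard.
    function encodeWithRefs(root) {
      var seen = [];
      return JSON.stringify(root, function (key, value) {
        if (value && typeof value === 'object') {
          var id = seen.indexOf(value);
          if (id !== -1) return { $ref: id }; // back-reference
          seen.push(value);
        }
        return value;
      });
    }

    var a = { name: 'a' };
    a.self = a; // a cycle that plain JSON.stringify would reject
    encodeWithRefs(a); // '{"name":"a","self":{"$ref":0}}'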
Similar to JSONH, which I use to pass recordsets back and forth. There's no recursion for nested stuff, though.

https://github.com/WebReflection/JSONH
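For reference, JSONH flattens a homogenous recordset into a single flat array of [key count, keys..., row values...]. Roughly like this, if memory serves (check the repo for the exact API):

    // Sketch from memory of the JSONH packed format:
    // [keyCount, ...keys, ...values row by row].
    var records = [
      { first: 'Homer', last: 'Simpson' },
      { first: 'Hank',  last: 'Hill' }
    ];
    JSONH.pack(records);
    // => [2, 'first', 'last', 'Homer', 'Simpson', 'Hank', 'Hill']
    JSONH.unpack([2, 'first', 'last', 'Homer', 'Simpson', 'Hank', 'Hill']);
    // => the original array of objects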
It's all fun until someone loses an eye.

What if I look at your API output and assume it's plain JSON (it happens to contain only unique items, so nothing was packed), but it's actually RJSON? Or whatever comes next?

The most important thing when adding another layer to a protocol is identification.

So, please, put the whole altered object into an RJSON root node so it's clear what we're dealing with.
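Something as simple as a versioned envelope would do; the field names here are illustrative, not part of any spec:

    // Illustrative envelope only -- "rjson" and "data" are
    // hypothetical field names, not part of any spec.
    var envelope = {
      rjson: 1, // format marker + version: "this is RJSON, v1"
      data: [7, ['programming', 'javascript'],
                ['Peter', 'Griffin', 'Homer', 'Simpson', 'Hank', 'Hill']]
    };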