XML, being part of the markup family tree, has a lot more history than simple JSON encoding... measuring its usefulness on a corner case has always been, well, boring. I am glad people are using JSON to sling simple data across the web instead of markup.

Come back to me when you are using JSON to encode an entire document... you might look at XML a bit differently.

tl;dr: use the right tool for the right job.
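To make the "entire document" point concrete, here is a made-up sentence with mixed content (text interleaved with inline elements), first as markup and then in one possible JSON encoding. The JSON shape is just an illustrative sketch, not any standard:

    <p>The patient presented with <em>acute</em> symptoms (see <cite>Smith 2003</cite>).</p>

    {"p": ["The patient presented with ",
           {"em": ["acute"]},
           " symptoms (see ",
           {"cite": ["Smith 2003"]},
           ")."]}

Markup keeps the prose readable with the structure inline; any JSON encoding of the same thing has to invent its own convention for interleaving strings and objects.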
Like many, I feel that the poster is throwing the baby out with the bathwater. Yes, SOAP and XML Schema are horrible; don't use them, then. Yes, XML is verbose, but that's exactly why Relax NG has a compact syntax; use that if you don't like the XML syntax. Yes, data can be expressed as attributes or elements, but there are simple rules of thumb for deciding between the two: if your data can have structure, or you may want multiple instances of the same thing, it's generally better to use an element; otherwise an attribute should do the trick (quick example below).

There are also errors and approximations: XML did not introduce the bracket syntax, it inherited it from SGML. A DTD is not a schema (and if you want to criticize XML, you should point out that it should *not* have inherited DTDs from SGML). He doesn't even mention the worst part about comments, which is that you can't have -- inside a comment (very annoying when commenting out a large block of data...).

XML has many beautiful applications, like SVG, SMIL (which never took off but keeps getting rediscovered/reimplemented in an inconsistent manner [full disclosure: I participated in the SMIL and CDF W3C working groups]), XSLT, &c. XHTML was not perfect by a long stretch, but the new HTML5 syntax is much, much worse.

Use XML, JSON, and whatever is necessary to get the job done. For the project I am working on right now, I am using XML to serialize Web app descriptions; in *this* situation, XML is clearly better than JSON.
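About that attribute/element rule of thumb, a made-up illustration: a book's language is a single atomic value, so an attribute fits; authors have internal structure and can repeat, so elements fit:

    <book lang="en">
      <author><first>Ada</first><last>Lovelace</last></author>
      <author><first>Charles</first><last>Babbage</last></author>
    </book>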
The main arguments are about syntax. There's no mention of namespaces, schemas are dismissed because the author didn't use them, and there's no mention of tools for automated processing.

This is not a very good article, in all honesty. He doesn't like the syntax, but doesn't seem to consider that different technologies can be suited to different problems, or that he simply hasn't experienced the kinds of uses where XML works well.
XML can do the same trick with indirect objects as COS, using ID- and IDREF-typed attributes. A number of years ago I was dealing with the archival and retrieval/display of enormous medical textbooks in XML, and I couldn't efficiently pull out arbitrary elements (chapters, sections, paragraphs, etc.) because of the hierarchical nature of the XML document structure. I had to parse the whole thing before I could use an XPath to get the element I needed, and that took too long. (My parser could handle 3 MB/sec, and some of these books were over 100 MB.)

The solution I came up with was a program that transformed each document by flattening it into a relatively small hierarchical structure representing the volume/chapter/section headings of the book, plus a flat list of elements small enough to parse quickly. I inserted ID and IDREF attributes to link the two parts together, and created an external index of the file offset and length of each element in the flattened list. That let me use simple file I/O to access any element by ID, pull it out of the larger file, and only then start the parsing engine.

It was like the article mentions: my XML file, together with the external index (in a simple Unix DBFile), was a miniature NoSQL database of the textbook.

BTW, this predated the "NoSQL" label, and was developed after testing showed that Oracle and the XML databases of the day completely failed to meet performance and scalability requirements. My solution has an effectively unlimited capacity to scale: its performance is not affected by the number of books in the system or by their size. The time to retrieve and display a single chapter or any subelement is proportional only to the size of that chapter, not to the overall size of the collection. All of the other solutions we looked at got slower as the number and size of the books increased. (I mention this only to head off any comments about reinventing the wheel.)
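For flavor, here's a minimal sketch of the retrieval side of that design in Python. The index format, file names, and IDs are hypothetical stand-ins (the original system long predates this), but the idea is the same: seek to the element's byte range first, and only then start the parser on that small fragment.

    import xml.etree.ElementTree as ET

    def load_index(index_path):
        # Hypothetical index format: one "id offset length" triple per line.
        index = {}
        with open(index_path) as f:
            for line in f:
                elem_id, offset, length = line.split()
                index[elem_id] = (int(offset), int(length))
        return index

    def fetch_element(doc_path, index, elem_id):
        # Seek directly to the element's byte range; no full-document parse.
        offset, length = index[elem_id]
        with open(doc_path, "rb") as f:
            f.seek(offset)
            fragment = f.read(length)
        # Only now start the parsing engine, on the small fragment alone.
        # This works because each element in the flattened list is a
        # self-contained, well-formed XML fragment.
        return ET.fromstring(fragment)

    # Usage (file names are made up): pull one section out of a 100+ MB
    # book in time proportional to the section, not the collection.
    index = load_index("book.idx")
    section = fetch_element("book.xml", index, "sect-4-2")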