I love listening to young developers <i>guess</i> at the history of XML, and why it was "complex" (it wasn't), and then turn around and reinvent that wheel, with every bit of complexity that they just said they didn't like... because it's necessary.<p>So a bit of history from someone who was already developing for over a decade when XML was <i>the new hotness:</i><p>The before times were bad. Really bad. Everybody and everything had their own text-based formats.[1] I don't just mean a few minor variants of INI files. I mean wildly different formats in <i>different character encodings</i>, which were literally <i>never provided</i>. Niceties like UTF-8 weren't in widespread use yet.<p>Literally every application interpreted its config files differently, generated output logs differently, and spoke "text" over the network or the pipeline differently.<p>If you needed to read, write, send, or receive <i>N</i> different text formats, you needed at least <i>N</i> parsers and <i>N</i> serializers.<p>Those parsers and serializers didn't exist.<p>They just didn't. The formats were not formally specified; they were just "whatever some program does"... "on some machine". Yup. They output <i>different text encodings</i> on different machines. Or even the same machine! Seriously, if two users had different regional options, they might not be able to share files generated by the same application on the same box.<p>Basically, you either had a programming "library" available so that you could completely sidestep the issue and avoid the text, or you'd have to write your own parser, personally, by hand. I <i>loooved</i> the early versions of ANTLR because they made this at least tolerable. Either way, good luck handling all the corner cases of escaping control characters inside a quoted string that also supports macro escapes, embedded sub-expressions, or whatever.
Fun times.<p>Then XML came along.<p>It precisely specified the syntax, and there were off-the-shelf parsers and generators for it in multiple programming languages! You could generate an XML file on one platform and read it in a different language on another by including a standardised library that you could just <i>download</i> instead of typing in a parser by hand like an animal. It even specified the text encoding, so you wouldn't have to guess.<p>It was glorious.<p>Microsoft especially embraced it, and to this day you can see a lot of that history in Visual Studio project files, ASP.NET web config files, and the like.<p>The reasons JSON slowly overtook XML are manifold, but the key one is simple: it was easier to parse JSON into JavaScript objects in the browser, and the browser was taking off exponentially as an application development platform. JavaScript programmers outnumbered everyone else combined.<p>Notably, the early versions of JSON were typically read using just the "eval()" function.[2] It wasn't an encoding per se, just a subset of JavaScript. Compared to needing an XML parser in JavaScript, it was very lightweight. In fact, <i>zero</i> weight, because if JavaScript was available, then by definition, JSON was available.<p>The timeline is important here. An in-browser XML parser was available before JSON was a thing, but only for IE 5 on Windows. JSON was invented in 2001, and XMLHttpRequest became consistently available in other browsers only after 2005 and wasn't standardised until 2006. Truly universal adoption took a few more years after that.<p>XML was only "complex" because it's not an object notation like JSON is. It's a <i>document markup language</i>, much like HTML. Both trace their roots back to SGML, which dates back to 1986. These types of languages were used in places like Boeing for record keeping, such as tracking complex structured and semi-structured information about aircraft parts over decades.
That kind of problem has an essential complexity that can't be wished away.<p>JSON is simpler for <i>data exchange</i> because it maps nicely to how object-oriented languages store pure data, but it can't readily represent human-readable documents the way XML can.<p>The other simplification was that JSON did away with schemas and the like, and was commonly used with dynamic languages. Developers got into the habit of reading JSON by shoving it into an object and then interpreting it directly, without any kind of parsing or decoding layer. This works kinda-sorta in languages like Python or JavaScript, but is horrific when used at scale.<p>I'm a developer used to simply clicking a button in Visual Studio to have it instantly bulk-generate entire API client libraries from a WSDL XML API schema, documentation and all. So when I hear REST people talk about how much simpler JSON is, I have <i>no idea what they're talking about</i>.<p>So now, slowly, the wheel is being reinvented to avoid the manual labour of REST and return to the machine automation we had with WS-*. There are JSON API schemas (multiple!), written in JSON (of course), so documentation can't be expressed in-line (because JSON is not a markup language). I'm seeing declarative languages like workflow engines and API-management expressions written in JSON gibberish now, same as we did with XML twenty years ago.<p>Mark my words, it's just a matter of time until someone invents JSON namespaces...<p>[1] Most of the older Linux applications still do, which makes it ever so much fun to robustly <i>modify</i> config files programmatically.<p>[2] Sure, these days JSON is "parsed" even by browsers instead of sent to eval(), for security reasons, but that's not how things started out.