I started reading the article with much interest... up until the bit about the Semantic Web. Then I felt things went downhill.<p>> One such effort was the Semantic Web. The dream was to create a Resource Description Framework (editorial note: run away from any team which seeks to create a framework), which would allow metadata about content to be universally expressed. For example, rather than creating a nice web page about my Corvette Stingray, I could make an RDF document describing its size, color, and the number of speeding tickets I had gotten while driving it.<p>> This is, of course, in no way a bad idea. But the format was XML based, and there was a big chicken-and-egg problem between having the entire world documented, and having the browsers do anything useful with that documentation.<p>The author completely fails to describe the evolution of the SemWeb over the past 10 years. Tons of specs, declarative languages, and technologies have grown up, not just to get past the verbosity of a serialization format such as XML, but also to move away from the classic relational data model.<p>Turtle, JSON-LD, SPARQL, Neo4J, Linked Data Fragments,... come to mind. And then there are the emerging applications of linked data. If anything, the Federated Web is exactly about URLs and semantic web technologies based on linking and contextualizing data.<p>The entire premise of Tim Berners-Lee's Solid/Inrupt is based on these standards, including URIs.<p>Linked data and federation aren't just about challenging social media; they're also about creating knowledge graphs - such as wikidata.org - and creating opportunities for things such as open access and open science.<p>Then there's this:<p>> httpRange-14 sought to answer the fundamental question of what a URL is. Does a URL always refer to a document, or can it refer to anything? Can I have a URL which points to my car?<p>> They didn’t attempt to answer that question in any satisfying manner. 
Instead they focused on how and when we can use 303 redirects to point users from links which aren’t documents to ones which are, and when we can use URL fragments (the bit after the ‘#’) to point users to linked data.<p>Err. They did.<p>That's what the Resource Description Framework is all about. It gives you a few foundational building blocks for describing the world. Even more so, URIs have absolutely NOTHING to do with HTTP status codes. It just so happens that HTTP leverages URIs and creates a subset, called HTTP URLs, that allows the identification and dereferencing of web-based resources.<p>You can use URIs as globally unique identifiers in a database. You could use URNs to identify books. For instance, urn:isbn:0451450523 is an identifier for the 1968 novel The Last Unicorn.<p>So, this is a false claim. I could forgive them for inadvertently not looking beyond URLs as a mechanism used within the context of HTTP communication.<p>> In the world of web applications, it can be a little odd to think of the basis for the web being the hyperlink. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the social shared computing experience so many 70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools which millions of hours of work went into which precisely no one uses today.<p>I'm not even sure what the conclusion is here. Did the 'hyperlink' fail? Did the concept of a 'URI' fail? (Both are different things!) 
Neither failed; quite the contrary!<p>Then there's this wonky comparison of the origin of the Web with a single project or a startup. The author did all this research on the history of the URI, but still failed to see that the Internet and the Web were invented by committee and by coincidence. Pioneers all over the place had good ideas; some coalesced and succeeded, others didn't. Some were adapted to work together in a piecemeal fashion, such as Basic Auth.<p>And that's totally normal. Organic growth and distributed development are the baseline. Yes, the Web as we know it today is the result of many competing voices, but at the same time it could only work if everyone ended up agreeing on the basics.<p>The fact of the matter is that some companies - looking at you, FAANG - would rather have us all locked into closed, black-box ecosystems than have open standards around that allow for interoperability, and thus create opportunities for new entrants to challenge their business interests.<p>I understand that the article is written by Cloudflare, a CDN company with its own interests. But I'm still trying to wrap my head around how the author failed to address exactly these future opportunities and threats after this entire exposé.
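<p>To make the URI/URL/URN distinction concrete, here's a minimal Python sketch using only the standard library. The urn:isbn identifier is the Last Unicorn example from above; the Corvette triple is purely hypothetical, with made-up example.org URIs standing in for the article's car page (only the dcterms:description predicate is a real vocabulary term):

```python
from urllib.parse import urlparse

# Both of these are URIs; only the second is an HTTP URL.
urn = urlparse("urn:isbn:0451450523")                # pure identifier, nothing to dereference
url = urlparse("https://example.org/cars/stingray")  # hypothetical: identifies AND locates a resource

print(urn.scheme)  # 'urn'   -> a name, no server or status codes involved
print(url.scheme)  # 'https' -> can be fetched over HTTP

# An RDF statement is just a (subject, predicate, object) triple built from
# URIs and literals. Hypothetical sketch of the article's Corvette example:
triple = (
    "https://example.org/cars/stingray",     # subject (made-up URI for the car itself)
    "http://purl.org/dc/terms/description",  # predicate (Dublin Core 'description')
    "A nice Corvette Stingray",              # object (plain literal)
)
```

The point the snippet illustrates: the URN identifies a book with no HTTP request anywhere in sight, which is exactly why httpRange-14 and 303 redirects are an HTTP-layer concern, not a property of URIs themselves.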