Cloudflare doesn't obey the Accept header on anything other than image content types.<p>This means if you have an endpoint that returns HTML or JSON depending on the requested content type and you try to serve it from behind Cloudflare you risk serving cached JSON to HTML user agents or vice-versa.<p>I dropped the idea of supporting content negotiation from my Datasette project because of this.<p>(And because I personally don't like that kind of endpoint - I want to be able to know if a specific URL is going to return JSON or HTML).
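<p>For illustration, the kind of endpoint at issue looks roughly like this (a minimal Flask sketch; the route and data are invented). The Vary: Accept response header is what tells a cache that the body depends on the request's Accept header; the problem is that Cloudflare only honors Accept-based variation for image content types.<p><pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/items/<int:item_id>")
def item(item_id):
    data = {"id": item_id, "name": "example"}  # placeholder data
    best = request.accept_mimetypes.best_match(["text/html", "application/json"])
    if best == "application/json":
        resp = jsonify(data)
    else:  # fall back to HTML for browsers (and anything ambiguous)
        resp = app.make_response(f"<h1>Item {data['id']}</h1>")
    resp.headers["Vary"] = "Accept"  # the body depends on the Accept header
    return resp
</code></pre>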
I agree with the underlying point being made here: an API (REST/JSON) that serves data should be kept separate from the endpoints that serve an application, in this case an HTML-based one.<p>It seems like building your application around a single API that is also used to provide data externally saves you time, but you end up polluting that API with presentation concerns needed to drive the application's reports/grids/views. It's not worth the mental energy to consider how changes you need to make for presentation might affect the 'purity' of your public API. Returning hypermedia from the 'internal' API just forces that separation: there's no expectation that this 'data' is being returned for consumption by anything except the app that uses it.
I think content negotiation is great when your use case supports it, like asking for XML or JSON. Also, the mappings of content types should be well defined for all to see and edit; Rails is kind of a labyrinth in that regard.<p>I do tend to prefer actual file extensions, though. Friendlier for humans (curl <a href="https://endpoint/item.json" rel="nofollow noreferrer">https://endpoint/item.json</a> vs curl -H "accept: application/json" <a href="https://endpoint/item" rel="nofollow noreferrer">https://endpoint/item</a>), and the format is visible in logging infrastructure, shareable, etc.
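<p>On the server side the two options look roughly like this (a Flask sketch, purely illustrative; the paths and data are made up):<p><pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)
ITEM = {"id": 1, "name": "example"}  # placeholder data

@app.route("/item.json")
def item_json():
    # Explicit extension: the format is visible in the URL, in logs, in shared links.
    return jsonify(ITEM)

@app.route("/item")
def item_negotiated():
    # Header-based: the format depends on what the client says it accepts.
    accept = request.accept_mimetypes
    if accept.accept_json and not accept.accept_html:
        return jsonify(ITEM)
    return f"<h1>{ITEM['name']}</h1>"
</code></pre>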
I really enjoy these blog posts from htmx.org. They make me faster at building applications, because they challenge conventional wisdom that slows me down without offering a compelling advantage.<p>Side note: I guess a better title for this blog post would be "<i>Don't share endpoints between machines and humans</i>". If the consumer of an endpoint is a human, it's probably a bad idea to make machines use the same endpoint. Content negotiation is fine if, say, I want an API to return binary data instead of JSON (because I'm on an IoT device with sparse resources). That works because it's exactly the same API with just the data format being different, and in both cases it's consumed by a machine.<p>In the case of the frontend/HTML, the consumer is a human (and not a machine), which — as the article mentions — adds a bunch of constraints that are not applicable to machines: a need for pagination (because scrolling through a thousand results can be annoying), a need for ordering (because I want the most relevant results first), and a need to display related content (as the article explains). The machine API doesn't necessarily need any of these features.
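<p>To make the IoT example concrete, a sketch (cbor2 is just one possible binary encoder here; MessagePack or protobuf would play the same role):<p><pre><code>import cbor2  # assumption: any compact binary encoder would do
from flask import Flask, request, jsonify, Response

app = Flask(__name__)
READING = {"sensor": "temp-01", "celsius": 21.4}  # placeholder data

@app.route("/reading")
def reading():
    best = request.accept_mimetypes.best_match(
        ["application/json", "application/cbor"]
    )
    if best == "application/cbor":
        # Compact binary for constrained clients, e.g. an IoT device.
        return Response(cbor2.dumps(READING), mimetype="application/cbor")
    return jsonify(READING)  # default: human-debuggable JSON
</code></pre><p>Both branches return exactly the same data; only the encoding differs, and both consumers are machines.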
So, this is actually about: Why I Tend Not To Use Content Negotiation (to serve both HTML and JSON data from the same endpoints).<p>The author also suggests:<p>> The alternative is to ... splitting your APIs. This means providing different paths (or sub-domains, or whatever) for your JSON API and your hypermedia (HTML) API.<p>I believe the alternative has actually been the norm. For example, many front-end frameworks encode UI state in the URL, and it's hard to keep UI states and data APIs aligned over the long term.
Content negotiation is surely a fire composed of tires, but there are sadly many scenarios where it is necessary. Deciding whether to redirect, for example, is often context-sensitive.<p>In other words, content negotiation is useful for responding intelligibly. If a client asks you for JSON but not HTML, it might not make sense to return HTML.
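<p>The redirect case, as a sketch (Flask for illustration; the auth check is stubbed): an unauthenticated browser gets redirected to a login page it can render, while a JSON client gets a 401 it can actually parse.<p><pre><code>from flask import Flask, request, jsonify, redirect

app = Flask(__name__)

def current_user():
    return None  # stub: pretend nobody is logged in

@app.route("/dashboard")
def dashboard():
    if current_user() is None:
        if request.accept_mimetypes.accept_html:
            return redirect("/login")  # a browser can follow this
        return jsonify({"error": "unauthorized"}), 401  # a script can parse this
    return "<h1>Dashboard</h1>"
</code></pre>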
A technology like htmx seems to demand a "hypermedia" API, whereas things like Angular and React can consume a data API in many cases. However, once an application becomes sufficiently complex, people end up building data APIs just to suit their frontend framework. In that case, doing htmx and returning HTML seems nicer.
Content negotiation is useful when there are multiple feasible return formats. But when is that the case?<p>For data, JSON is the absolute king. For content, HTML is king. There is very little to negotiate.<p>The only case where I needed that feature was when data scientists wanted to download data from my API and needed a bunch of formats (Parquet, CSV, TSV). But then they did not really grok content negotiation and asked for a query param. So finally I think this is like a lot of HTTP features: half-baked and from a different time. HTTP would do well to drop it.
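<p>The query-param compromise ended up looking roughly like this (a sketch; pandas is assumed as the serializer, and Parquet additionally needs pyarrow or fastparquet installed):<p><pre><code>import io
import pandas as pd
from flask import Flask, request, Response

app = Flask(__name__)

@app.route("/export")
def export():
    df = pd.DataFrame({"id": [1, 2], "value": [3.5, 4.2]})  # placeholder data
    fmt = request.args.get("format", "csv")
    if fmt == "parquet":
        buf = io.BytesIO()
        df.to_parquet(buf)
        return Response(buf.getvalue(), mimetype="application/octet-stream")
    sep = "\t" if fmt == "tsv" else ","
    mimetype = "text/tab-separated-values" if fmt == "tsv" else "text/csv"
    return Response(df.to_csv(index=False, sep=sep), mimetype=mimetype)
</code></pre>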
> Your JSON API should remain stable. You can’t be adding and removing end-points willy-nilly. Yes, you can have some end-points respond with either JSON or HTML and others only respond with HTML, but it gets messy. What if you accidentally copy-and-paste in the wrong code somewhere, for example.<p>Can't we handle this situation by regarding each version of the JSON output as a separate content type (which is arguably the semantically correct thing anyway) and then letting the server pick the most recent output version that the client supports?
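<p>Something like this, I mean (a sketch; the vnd.example media type names are invented):<p><pre><code>import json
from flask import Flask, request, Response

app = Flask(__name__)

V1 = "application/vnd.example.item.v1+json"
V2 = "application/vnd.example.item.v2+json"

@app.route("/item/<int:item_id>")
def item(item_id):
    # Serve the newest schema version the client says it accepts.
    best = request.accept_mimetypes.best_match([V2, V1])
    if best == V2:
        body = {"id": item_id, "name": {"first": "Ada", "last": "Lovelace"}}
    else:
        body = {"id": item_id, "name": "Ada Lovelace"}  # legacy v1 shape
    return Response(json.dumps(body), mimetype=best or V1)
</code></pre><p>Old clients keep getting the v1 shape for as long as you choose to support it, without any new paths.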
To me this just feels wasteful. Most APIs only ever return one type of thing, so the business of asking for application/json with every request and then the API confirming that, yes, it's still sending application/json;charset=utf-8 seems like a pointless waste of bandwidth. Same with API versioning: most APIs are stuck at v1 forever. It never changes. It never gets verified. It gets hard-coded all over the place. All for the option of maybe introducing a v2 on the same server someday. It seems like a lot of premature optimization for not a whole lot of gain.
I get this on one level, but wow, you can really tell the difference between a team that uses the API it offers users and one that doesn’t, and the “single API” approach gets you there so well.<p>I have basically never seen a nice user-facing API when it’s been split out. Sometimes that’s fine, but at least for enterprise use cases, having a “real” API just feels like table stakes in so many domains for getting bigger clients on board.
Tangential, but a "these types are available" counterpart to "Accept" has always been so obviously missing. What's the point of Accept if you don't know what to ask for?<p>Maybe in the trivial case, saying "I'd prefer a JPG to a PNG" can be an asymmetrical choice. But in all the interesting use cases I can think of, e.g. where there are competing representation formats, you'd want the server to be able to respond to a HEAD request with the choices.<p>That's the kind of thing you can put in Swagger, but that might lead to hoisting the client's choice into the API, away from Content Negotiation.
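<p>What I'm picturing is something like this (a Flask sketch; the X-Available-Types header name is made up, which is exactly the problem: there's no widely supported standard way to advertise it):<p><pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)
AVAILABLE = ["text/html", "application/json", "image/png"]

@app.route("/resource", methods=["GET", "HEAD"])
def resource():
    if request.method == "HEAD":
        # Advertise the representations on offer before the client commits.
        return "", 200, {"X-Available-Types": ", ".join(AVAILABLE)}
    best = request.accept_mimetypes.best_match(AVAILABLE)
    return jsonify({"served_as": best})
</code></pre>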
> Data APIs should be rate limited<p>Hypermedia APIs should be rate limited as well, because otherwise people will just go and screen scrape (like many HN apps do, because HN doesn't offer an API). All a "data" API does is make the scraper's job easier.<p>> Data APIs typically use some sort of token-based authentication / Hypermedia APIs typically use some sort of session-cookie based authentication<p>So what. Any web framework worth its salt can support multiple authentication / credential mechanisms - the only "benefit" I can see from limiting cookie authentication is to make life for bad actors with cookie-stealer malware harder (like GitLab does, IIRC).
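<p>To illustrate the auth point, a single endpoint can accept either credential without much ceremony (a Flask sketch; the lookup functions are stubs):<p><pre><code>from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def user_from_token(token):
    return {"id": 1} if token == "valid-token" else None  # stub lookup

def user_from_session(session_id):
    return {"id": 1} if session_id == "valid-session" else None  # stub lookup

def authenticate():
    # Accept either a bearer token or a session cookie for the same endpoint.
    auth = request.headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return user_from_token(auth.removeprefix("Bearer "))
    return user_from_session(request.cookies.get("session"))

@app.route("/data")
def data():
    user = authenticate()
    if user is None:
        abort(401)
    return jsonify({"user": user["id"]})
</code></pre>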
Isn't the name "Data Application Programming Interface" redundant?<p>And "Hypermedia Application Programming Interface" wrong, because generally it's not for an application at all, but rather a (non-programming, as the author says, "for humans") interface to display multimedia documents? (I guess you get (inevitable, if not necessarily good) feature creep as soon as you start including something like forms; see also: forms in PDFs?)
Split it out but keep using the same models. Wouldn’t want your entities and semantics to drift apart between your UI and API.<p>Personally I prefer sticking to the standards. At least that way when you move between projects you know what you’re getting into.<p>But everyone has their own conventions these days. It’s all fragmenting.
At a very basic level, isn't it just confusing to have the same URL and method return two completely different formats?<p>I feel like a good API is limited in what it accepts, and this alone is enough to say one should not do content negotiation unless forced to.
Carson Gross' main thesis is this: "I am advocating tightly coupling your web application to your hypermedia API". In other words, "have an API that returns HTML snippets tightly coupled to your website."<p>It's ironic that he wrote an article about how the industry uses the term "REST API" incorrectly, because he himself keeps using the term "API" incorrectly. If an "API" is tightly coupled to a single application, it's not an "Application Programming Interface"... it's just a part of your application.<p>An API is supposed to be an interface on top of which <i>multiple</i> applications may rest, particularly without a specific frontend in mind: web, desktop app, mobile app, a component of other services, and so on. Obviously if it serves site-specific HTML snippets, that's not the case. The only reason he advocates this whole thing is that without it HTMX won't work, and in this way I find it quite myopic as a position. But if I were pushing HTMX I'd also be compelled to figure out reasons to make it sound good.<p>So from that PoV, talking about "Content Negotiation in HTML APIs" loses meaning, as what he has is not an API in the first place; it's just his HTML website with some partial requests in the mix. And <i>of course</i> you wouldn't mix your API and your site. But this <i>does not</i> imply you can't and shouldn't use Content Negotiation either on your site or in your API. You simply shouldn't use them to mix two things that never made sense to mix.<p>A lot of his blog posts would become completely unnecessary if he just said "don't mix your website and your API, and the HTMX partial requests are part of your website, not your API". Alas, he's stuck on this odd formulation of "hypermedia API", constantly having to clarify himself and making things as clear as mud.
> Not being content with alienating only the general purpose JSON API enthusiasts, let me now proceed to also alienate my erstwhile hypermedia enthusiast allies<p>Nah, I think he's right, and it's coherent to avoid HTTP subtleties in that web architecture.