> HTTP/1.1 is a delightfully simple protocol, if you ignore most of it.

As someone who had to write a couple of proxy servers, I can't express how sadly accurate that is.
> This is not the same as HTTP pipelining, which I will not discuss, out of spite.

That's because HTTP pipelining was, and is, a mistake, and it's responsible for a ton of HTTP request smuggling vulnerabilities, because HTTP/1.1's framing is ambiguous.

No browser supports it anymore, thankfully.
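To make that concrete, here's a minimal Rust sketch of what pipelining looks like on the wire (example.com is just a placeholder host): two requests written back to back on one connection, with both responses arriving on the same undifferentiated byte stream.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("example.com:80")?;

    // Pipelining: send the second request before reading the first response.
    stream.write_all(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")?;
    stream.write_all(b"GET /robots.txt HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")?;

    // Both responses come back on one byte stream; the only thing delimiting
    // them is correct parsing of Content-Length or chunked encoding on both ends.
    let mut responses = Vec::new();
    stream.read_to_end(&mut responses)?;
    println!("{}", String::from_utf8_lossy(&responses));
    Ok(())
}
```

If a client and an intermediary disagree about where one message ends and the next begins (say, because of conflicting Content-Length and Transfer-Encoding handling), that's exactly the opening request smuggling exploits.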
> We're not done with our request payload yet! We sent:

> Host: neverssl.com

> This is actually a requirement for HTTP/1.1, and was one of its big selling points compared to, uh...

> AhAH! Drew yourself into a corner didn't you.

> ...Gopher? I guess?

I feel like the author must know this: HTTP/1.0 supported the Host header but didn't require it, and HTTP/1.1 making it mandatory is what allowed consistent name-based virtual hosting on web servers.

I did appreciate the simple nature of the early protocols, although it's hard to argue against the many improvements in newer protocols. It was so easy to use nc to test SMTP and HTTP in particular.

I did enjoy the article's notes on the protocols, though the huge sections of code snippets lost my attention midway.
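For context, here's a minimal Rust sketch of that nc-style test (the host names and address are hypothetical): two requests to the same IP and port where only the Host header differs, which is precisely the name-based virtual hosting that HTTP/1.1's mandatory Host header made reliable.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

// Hypothetical virtual hosts: both names resolve to the same address,
// and the server picks a site based solely on the Host header.
const HOSTS: [&str; 2] = ["site-a.example", "site-b.example"];

fn main() -> std::io::Result<()> {
    for host in HOSTS {
        // Same IP and port either way; only the Host header differs.
        let mut stream = TcpStream::connect("192.0.2.10:80")?;
        let request = format!("GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n");
        stream.write_all(request.as_bytes())?;

        let mut response = String::new();
        stream.read_to_string(&mut response)?;
        // An HTTP/1.0 server with no Host header to go on could only ever
        // serve one site per IP address.
        println!("--- {host} ---\n{}", response.lines().next().unwrap_or(""));
    }
    Ok(())
}
```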
That was an excellent, well-written, well-thought-out, well-presented, interesting, humorous, enjoyable read. Coincidentally I recently did a Rust crash course, so it all made perfect sense (I am not an IT pro). Anyhow, thanks.
I learned HTTP/1 pretty well, but not much of HTTP/2.

Since playing with QUIC, I've lost all interest in learning HTTP/2; it feels like something already outdated that we're collectively going to skip over soon.
Amos' writing style is just so incredibly good. I don't know anyone else doing these very long-form, conversational-style articles.

Plus, you know, just an awesome dev who knows his stuff. Huge fan.
What a great overall site. Hopping down the links, I found the section on reading files, with code examples in JS, Rust, and C, plus strace; really the best short explanation I've ever found online.

https://fasterthanli.me/series/reading-files-the-hard-way/part-1
This is awesome. I haven't read all of it yet, but I will for sure. I use HTTP way too much and too often to ignore these underlying concepts, and when I try to look them up there's always way too much abstraction and the claims aren't proven with a simple example; this article is full of simple examples. Thanks Amos!
> Where every line ends with \r\n, also known as CRLF, for Carriage Return + Line Feed, that's right, HTTP is based on teletypes, which are just remote typewriters

Does it need to be pointed out that this is complete bullshit?
Is HTTP always the same protocol as HTTPS, given the same version, and ignoring the encryption from TLS? Theoretically yes, but in practice?

I've done my share of nc testing on even simpler protocols than HTTP/1.1.

For some reason the migration to HTTPS scared me despite the security assurances. I could no longer see anything useful in Wireshark; I now had to trust one more layer of abstraction.
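For what it's worth, here's a minimal Rust sketch of the "same protocol, different transport" idea, assuming the native-tls crate and using example.com as a placeholder: the exact same HTTP/1.1 request bytes are written once to a plain TCP stream and once to a TLS-wrapped one.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

use native_tls::TlsConnector;

const REQUEST: &[u8] = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";

fn main() {
    // Plain HTTP: write the request straight onto the TCP stream.
    let mut plain = TcpStream::connect("example.com:80").expect("connect");
    plain.write_all(REQUEST).expect("write");
    let mut plain_response = Vec::new();
    plain.read_to_end(&mut plain_response).expect("read");

    // HTTPS: do a TLS handshake first, then write the exact same bytes.
    let connector = TlsConnector::new().expect("tls connector");
    let tcp = TcpStream::connect("example.com:443").expect("connect");
    let mut tls = connector.connect("example.com", tcp).expect("handshake");
    tls.write_all(REQUEST).expect("write");
    let mut tls_response = Vec::new();
    tls.read_to_end(&mut tls_response).expect("read");

    println!("{}", String::from_utf8_lossy(&tls_response));
}
```

The Wireshark point stands, though: on the 443 connection only TLS records are visible on the wire, so inspecting the plaintext requires the session keys (browsers can export them via SSLKEYLOGFILE for Wireshark to use).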
As far as I can tell the Host header is pointless, because if it's SSL/TLS you won't be able to read it and route on it; that's what SNI is for. If you aren't using TLS then you don't need it, unless you hit the server by IP address. But then why would you do that?
Also, never trust the Content-Length. It's been that way since before HTTP was finalized. Use it as guidance, but don't treat it as canonical.
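As an illustration of treating it as guidance rather than gospel, here's a minimal Rust sketch (the function name and signature are made up for the example): the declared Content-Length caps how much gets read, but a peer closing early simply ends the body instead of being trusted blindly.

```rust
use std::io::Read;

// Read a response body defensively: `declared_len` comes from whatever
// header parsing you already do, and is only an upper bound.
fn read_body(mut stream: impl Read, declared_len: usize) -> std::io::Result<Vec<u8>> {
    let mut body = Vec::with_capacity(declared_len.min(64 * 1024));
    let mut buf = [0u8; 8192];

    while body.len() < declared_len {
        let want = (declared_len - body.len()).min(buf.len());
        let n = stream.read(&mut buf[..want])?;
        if n == 0 {
            // Peer closed before sending everything it promised.
            break;
        }
        body.extend_from_slice(&buf[..n]);
    }
    Ok(body)
}

fn main() {
    // Demo: the peer "promises" 10 bytes but only sends 4.
    let short = std::io::Cursor::new(b"oops".to_vec());
    let body = read_body(short, 10).expect("read");
    assert_eq!(body, b"oops".to_vec());
}
```

Whether a short body should be surfaced as an error or kept as truncated data is an application-level call; the point is only that the header by itself doesn't settle it.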