
Why does gRPC insist on trailers?

324 points by strzalek, almost 3 years ago

25 comments

twiss, almost 3 years ago

> Whether it's because I was wrong, or failed to make the argument [for HTTP trailers support], I strongly suspect organizational boundaries had a substantial effect. The Area Tech Leads of Cloud also failed to convince their peers in Chrome, and as a result, trailers were ripped out [from the WHATWG fetch specification].

FWIW, I personally think it's a good thing that other teams within Google don't have too much of an "advantage" for getting features into Chrome, compared to other web developers; *however*, I also think it's very unfortunate that a single Chrome engineer gets to decide not only that it shouldn't be implemented in Chromium, but also that this has the effect of it being removed from the specification. (The linked issue [1] was also opened by a Google employee.)

Of course, you might reasonably argue that, without consensus among the browsers to implement a feature, having it in the spec is useless. But nevertheless, with Chromium being an open source project, I think it would be better if it had a more democratic process for deciding which features should be supported (without, of course, requiring Google specifically to implement them, but also, ideally, without giving Google the power to veto them).

[1]: https://github.com/whatwg/fetch/issues/772

joe_guy, almost 3 years ago

I had never heard of HTTP trailers. So FYI:

> The Trailer response header allows the sender to include additional fields at the end of chunked messages in order to supply metadata that might be dynamically generated while the message body is sent, such as a message integrity check, digital signature, or post-processing status.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Trailer

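Go's net/http happens to support trailers natively, so here is a minimal sketch of what that MDN description looks like in practice; the `/data` endpoint and the `X-Body-Sha256` trailer name are invented for illustration:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
)

func main() {
	http.HandleFunc("/data", func(w http.ResponseWriter, r *http.Request) {
		// Announce the trailer name before the body is written.
		w.Header().Set("Trailer", "X-Body-Sha256")

		h := sha256.New()
		flusher, _ := w.(http.Flusher)
		for i := 0; i < 3; i++ {
			chunk := []byte("some streamed data\n")
			h.Write(chunk)
			w.Write(chunk)
			if flusher != nil {
				flusher.Flush() // forces chunked transfer encoding
			}
		}

		// Anything set after the body has been written goes out as a trailer,
		// i.e. metadata computed while the body was being sent.
		w.Header().Set("X-Body-Sha256", hex.EncodeToString(h.Sum(nil)))
	})
	http.ListenAndServe(":8080", nil)
}
```

Fetching it with `curl --raw -v localhost:8080/data` shows the checksum arriving after the final zero-length chunk rather than with the headers.
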
thamer, almost 3 years ago

A few years ago I worked on a service that had to stream data out using protobuf messages, in a single request that could potentially transfer several gigabytes of data. At the HTTP level it was chunked, but above that I used a protobuf message that contained data plus a checksum of that data, with the last message of the stream containing no data but a checksum of the entire dataset (a flag was included to differentiate between the message types).

This simple design led us to find several bugs in clients of this API (e.g. messages dropped or processed twice), and gave us a way to avoid some of the issues mentioned in this article. Even if you don't use HTTP trailers, you can still use them one layer above and benefit from similar guarantees.

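A rough Go sketch of the shape described above, just to make it concrete; the field names, the SHA-256 choice, and the channel plumbing are my assumptions, not the original service's protobuf schema:

```go
package stream

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Chunk mimics the messages described above: either a data chunk with its own
// checksum, or a final summary (Final=true) carrying a checksum of the whole dataset.
type Chunk struct {
	Data     []byte
	Checksum string // hex SHA-256 of Data, or of the full dataset when Final is set
	Final    bool
}

// VerifyStream checks every chunk as it arrives and the whole dataset at the end,
// so both corrupted and silently truncated streams are caught.
func VerifyStream(chunks <-chan Chunk) error {
	total := sha256.New()
	for c := range chunks {
		if c.Final {
			if got := hex.EncodeToString(total.Sum(nil)); got != c.Checksum {
				return fmt.Errorf("dataset checksum mismatch: got %s, want %s", got, c.Checksum)
			}
			return nil
		}
		sum := sha256.Sum256(c.Data)
		if hex.EncodeToString(sum[:]) != c.Checksum {
			return fmt.Errorf("chunk checksum mismatch")
		}
		total.Write(c.Data)
	}
	// A stream that ends without the summary chunk was cut off somewhere.
	return fmt.Errorf("stream ended without a final summary chunk")
}
```

The final dataset check and the missing-summary error are what catch dropped messages and truncation, with no HTTP trailers involved.
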
remram, almost 3 years ago

> As an aside, HTTP/2 is technically superior to WebSockets. HTTP/2 keeps the semantics of the web, while WS does not.

WTF is this? These are protocols at different layers. WebSocket can run on top of HTTP/2.

It's like saying TLS is technically superior to TCP, or IP is superior to copper cables.

Reference: https://www.rfc-editor.org/rfc/rfc8441.html

chucky_z, almost 3 years ago

From my perspective, I think the biggest issue with gRPC is its use of HTTP/2. I understand that there are a lot of reasons to say "No, HTTP/2 is far superior to HTTP/1.1." However, in terms of proxying _outside Google_, HTTP/2 has lagged, and continues to lag, at the L7 proxy layer. I recently performed a lot of high-throughput proxying comparing HAProxy, Traefik, and Envoy. HTTP/1.1 outperformed HTTP/2 (even H2C) by a pretty fair margin, enough that if gRPC used HTTP/1.1 we could use noticeably less hardware. I could see this holding true even with a service mesh.

Matthias247, almost 3 years ago

> In this flow, what was the length of the /data resource? Since we don't have a Content-Length, we are not sure the entire response came back. If the connection was closed, does it mean it succeeded or failed? We aren't sure.

I don't get that argument. gRPC uses length-prefixed protobuf messages. It is obvious to the peer whether a complete message (inside a stream or a single response) was received, with or without trailers.

The only thing that trailer support adds is the ability to send an additional late response code. That could have been added without trailers too: just put another length-prefixed block inside the body stream, and add a flag before it that differentiates trailers from a message. Essentially protobuf (application messages) in protobuf (definition of the response body stream).

I assume someone thought trailers would be a neat thing that is already part of the spec and can do the job. But the bet didn't work out, since browsers and most other HTTP libraries didn't find them worthwhile enough to fully support.

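For context, this is the framing being referred to: every gRPC message on the wire is preceded by a one-byte compression flag and a four-byte big-endian length. A minimal reader sketch in Go (the helper name is mine; real implementations also enforce message-size limits and handle decompression):

```go
package grpcframe

import (
	"encoding/binary"
	"fmt"
	"io"
)

// ReadFrame reads one gRPC length-prefixed message from a body stream:
// 1 byte compressed flag, 4 bytes big-endian length, then the payload.
// A short read means the message was truncated, trailers or no trailers.
func ReadFrame(r io.Reader) (compressed bool, msg []byte, err error) {
	var prefix [5]byte
	if _, err = io.ReadFull(r, prefix[:]); err != nil {
		return false, nil, fmt.Errorf("reading prefix: %w", err)
	}
	msg = make([]byte, binary.BigEndian.Uint32(prefix[1:5]))
	if _, err = io.ReadFull(r, msg); err != nil {
		return false, nil, fmt.Errorf("truncated message: %w", err)
	}
	return prefix[0] == 1, msg, nil
}
```
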
alexcpn, almost 3 years ago

I use gRPC between microservices instead of REST, and for that it is really great. All the deficiencies of REST (no versioning, no typing) go away with gRPC, and the protobuf is the official interface for all microservices. No problems with this approach for over two years now, and also multi-language support: we have Go, Java, Python, and TypeScript microservices happily talking and getting new features and new interface methods. Maybe it met its demise in the web space, but it's a jewel in the microservice space.

rswail, almost 3 years ago

Personal opinion: RPC is a failed architectural style, independent of what serialization/marshalling of arguments is used. It failed with CORBA, it failed with ONC-RPC, it failed with Java RMI.

Remote Procedure Calls attempt to abstract away the networked nature of the function and make it "look like" a local function call. That's Just Wrong. When two networked services are communicating, the network *must* be considered.

REST relies on the media type, links, and the limited verb set to define the resource and the state transfer operations to change the state of the resource.

HTTP explicitly incorporates the networked nature of the server/client relationship, independent of, and irrespective of, the underlying server or client implementation.

Media types, separated from the HTTP networking, define the format and serialization of the resource representation independent of the network.

HTTP/REST doesn't really support streaming.

jiggawatts, almost 3 years ago

Something that's always bugged me about streaming protocols of this type is that they prevent processing pipelining.

If trailers are used for things such as checksums, then the client must wait patiently for potentially *gigabytes* of data to stream to it before it can verify the data integrity and start processing it safely.

If the data is sent chunked, then this is not an issue. The client can start decoding chunks as they arrive, each one with a separate checksum.

throwaway29303, almost 3 years ago

Interesting read.

> As an aside, HTTP/2 is technically superior to WebSockets. HTTP/2 keeps the semantics of the web, while WS does not. Additionally, WebSockets suffers from the same head-of-line blocking problem HTTP/1.1 does.

Not really a fair comparison. WebSockets is essentially a bidirectional stream of bytes without any verbs[0] or anything fancy. WebSockets is more like a fancy CONNECT.

And speaking of bidirectional streams of bytes... HTTP/2 suffers from head-of-line blocking as well, since it uses TCP as its substrate, after all. QUIC, despite sharing some ideas with TCP, *seems* to ameliorate this by resorting to multipath[1]. It remains to be seen whether this is indeed going to be beneficial, however.

[0] - unless you count its opcode field as something similar to HTTP verbs, but if so it'd resemble TCP more than HTTP, I think

[1] - https://datatracker.ietf.org/doc/html/draft-ietf-quic-multipath-02

mdriley, almost 3 years ago

It seems like a lot of other technologies in this space have solved the listed problems while remaining compatible with browsers, load balancers, reverse proxies, etc.

It was a *product choice* not to offer a fallback path when HTTP/2 was unavailable. That choice made gRPC impossible to deploy in a lot of real-world environments.

What motivated that choice?

game-of-throws, almost 3 years ago

> Why Do We Need Trailers At All?

The author is convinced they're needed. But I wonder if some sort of error signaling should have been baked into `Transfer-Encoding: chunked` instead. It wouldn't have made sense in HTTP/1.1, since you can just close the connection. But in later HTTP versions with pipelined requests, I can see the use for bailing on one request while keeping the rest alive.

AceJohnny2, almost 3 years ago

Offtopic, but:

> *However, Google is not one single company, but a collection of independent and distrusting companies.*

This is an important thing to keep in mind when considering the behavior of *any* large company.

criticaltinker, almost 3 years ago

Relevant post from a few days ago:

Connect-Web: TypeScript library for calling RPC servers from web browsers

https://news.ycombinator.com/item?id=32345670

I'm curious if anyone knows how Google internally works around the lack of support for gRPC in the browser? Perhaps gRPC is not used for public APIs?

The lack of browser support in the protobuf and gRPC ecosystem was quite surprising and one of the biggest drawbacks noted by my team while evaluating various solutions.

wmf, almost 3 years ago

Is it still the case that Google Chrome can't support Google gRPC?

wonnage, almost 3 years ago

Trailers would be theoretically useful in a variety of HTML streaming-related cases if they actually had widespread support (but they don't):

- sending down Server-Timing values for processing done after the headers are sent
- updating the response status or redirecting after the headers are sent
- deciding whether a response is cacheable *after* you've finished generating it

All of these except the first one obviously break assumptions about HTTP and I'm not surprised they're unsupported. Firefox [1] actually supports the first case. The rest have workarounds: you can do a meta-refresh or a JS redirect, and you could simply not stream cacheable pages (assuming they'd generally be served from cache anyway).

But it's still the case that frontend code generally likes to throw errors and trigger redirects in the course of rendering, rather than performing all that validation up front. That's sensible when you're rendering in a browser, but makes it hard to stream stuff with meaningful status codes.

kiriberty, almost 3 years ago

Great article. I really like the points in the "Lessons for Designers" section. Applicable to software engineering in general as well.

akshayshah, almost 3 years ago

The author also posted an interesting Twitter thread a few months ago [0], on the day my coworkers and I posted here about our gRPC-compatible RPC framework [1]. I was a bit afraid to read this post, but I shouldn't have been - the author's a class act, and he never called us out explicitly. There's not much written about what the gRPC team was _thinking_ when they wrote up the protocol, and this was a nice window into how contemporaneous changes to HTTP and the fetch API shaped their approach. Given my current work, the final section ("Lessons for Designers") really hit home.

That said, I didn't follow the central argument - that you need HTTP trailers to detect incomplete protobuf messages. What's not mentioned in the blog post is that gRPC wraps every protobuf message in a 5-byte envelope, and the bulk of the envelope is devoted to specifying the length of the enclosed message. It's easy to detect prematurely terminated messages, because they don't contain the promised number of bytes. The author says, "[i]t's not hard to imagine that trailers would be less of an issue, if the default encoding was JSON," because JSON objects are explicitly terminated by a closing } - but it seems to me that envelopes solve that problem neatly.

With incomplete message detection handled, we're left looking for some mechanism to detect streams that prematurely terminate at a message boundary. (This is more likely than you might expect, since servers often crash at message boundaries.) In practice, gRPC implementations already buffer responses to unary RPCs. It's therefore easy to use the standard HTTP Content-Length header for unary responses. This covers the vast majority of RPCs with a simple, uncontroversial approach. Streaming responses do need some trailer-like mechanism, but not to detect premature termination - as long as we're restricting ourselves to HTTP/2, cleanly terminated streams always end with a frame with the end-of-stream bit set. Streaming does need some trailer-like mechanism to send the details of any errors that occur mid-stream, but there's no need to use HTTP trailers. As the author hints, there's some unused space in the message envelope - we can use one bit to flag the last message in the stream and use it for the end-of-stream metadata. This is, more or less, what the gRPC-Web protocol is. (Admittedly, it's probably a bad idea to rely on _every_ HTTP server and proxy on the internet handling premature termination correctly. We need some sort of trailer-like construct anyways, and the fact that it also improves robustness is a nice extra benefit.)

So from the outside, it doesn't seem like trailers improve the robustness of most RPCs. Instead, it seems like the gRPC protocol prioritizes some abstract notion of cleanliness over simplicity in practice: by using the same wire protocol for unary and streaming RPCs, everyday request-response workloads take on all the complexity of streaming. Even for streaming responses, the practical difficulties of working with HTTP trailers have also been apparent for years; I'm shocked that more of the gRPC ecosystem hasn't followed .NET's lead and integrated gRPC-Web support into servers. (If I had to guess, it's difficult because many of Google's gRPC implementations include their own HTTP/2 transport - adding HTTP/1.1 support is a tremendous expansion in scope. Presumably the same applies to HTTP/3, once it's finalized.)

Again, though, I appreciated the inside look into the gRPC team's thinking. It takes courage to discuss the imperfections of your own work, especially when your former coworkers are still supporting the project. gRPC is far from perfect, but the engineers working on it are clearly skilled, experienced, and generally decent people. Hats off to the author - personally, I hope to someday write code influential enough that a retrospective makes the front page of HN :)

0: https://twitter.com/CarlMastrangelo/status/1532256576274243584

1: https://news.ycombinator.com/item?id=31584555

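To make the "one bit of the envelope" point concrete: in gRPC-Web the end-of-stream trailers travel as one last frame whose flag byte has its most significant bit set, with the trailers serialized as header text. A hedged Go sketch of emitting that frame (the function name is mine, and real implementations percent-encode grpc-message and carry arbitrary metadata):

```go
package grpcweb

import (
	"encoding/binary"
	"fmt"
	"io"
)

// WriteTrailerFrame appends the gRPC-Web end-of-stream frame: a flag byte with
// the 0x80 bit set, a 4-byte big-endian length, then the trailers as header text.
func WriteTrailerFrame(w io.Writer, grpcStatus int, grpcMessage string) error {
	trailers := fmt.Sprintf("grpc-status: %d\r\ngrpc-message: %s\r\n", grpcStatus, grpcMessage)
	var prefix [5]byte
	prefix[0] = 0x80 // trailer frame, uncompressed
	binary.BigEndian.PutUint32(prefix[1:5], uint32(len(trailers)))
	if _, err := w.Write(prefix[:]); err != nil {
		return err
	}
	_, err := w.Write([]byte(trailers))
	return err
}
```

Regular messages use the same five-byte prefix with the high bit clear, so a client can tell data from end-of-stream metadata without ever touching HTTP trailers.
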
lakomen, almost 3 years ago

To this day it's still not clear to me, and even when asked on their GitHub issues there is no definite answer:

Can one use nginx in front of a gRPC-serving backend if the client is a JS client, in the broadest sense?

This unanswered question is the main reason I'm still doing RESTful JSON.

AtNightWeCode, almost 3 years ago

Very nice post. HTTP/2 did not solve the TCP HOL problems though. Not sure about the WS statement. On the other hand, vanilla WS has never ended up in prod on any of my projects, even though it has been implemented several times.

jenia2022, almost 3 years ago

I'm not getting it. Why is HTTP so inadequate for gRPC?

A service app, for example, can open 1000 sockets with a server and simply multiplex that way.

jeffbee, almost 3 years ago

Author doesn't support the case for gRPC being a "failure". I wonder by what measure. It's certainly pretty popular.

kris-nova, almost 3 years ago
gRPC: protobuf and stubby for performance reasons, we’ve spared no expense.
fijiaarone, almost 3 years ago

Application-layer encoding should not interfere with the protocol's transport layer.

jxi, almost 3 years ago

I was so excited for gRPC when it came out, because it meant having strongly typed APIs and auto-generated clients, but two things made it horrible to use: requiring HTTP/2 (so you couldn't use most load balancers at the time), and the generated clients were unpleasant to use (you couldn't just return an object to serialize; you had to conform to their streaming model).
