To be honest, I kind of find myself drifting away from gRPC/protobuf in my recent projects. I love the idea of an IDL for describing APIs and a great compiler/codegen (protoc), but there are just so many idiosyncrasies baked into gRPC at this point that it often doesn't feel worth it IMO.

I've been increasingly using LSP-style JSON-RPC 2.0. Sure, it's got its quirks and is far from the most wire/marshaling-efficient approach, but JSON codecs are ubiquitous and JSON-RPC is trivial to implement. In fact, I recently wrote a stack-allocated server implementation for microcontrollers in Rust: https://github.com/OpenPSG/embedded-jsonrpc

Varlink (https://varlink.org/) is another interesting approach; there are reasons why they didn't implement the full JSON-RPC spec, but their IDL is pretty interesting.
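For anyone who hasn't looked at it, the entire JSON-RPC 2.0 envelope is a handful of fields, which is why hand-rolling it is feasible. Here's a minimal, dependency-free sketch in Go (the "add" method and its parameter shape are made up for illustration, and a real server would read requests off a transport rather than a byte slice):

```go
// Minimal JSON-RPC 2.0 request handling sketch. The "add" method is invented
// for the example; encoding/json does all the heavy lifting.
package main

import (
	"encoding/json"
	"fmt"
)

type Request struct {
	JSONRPC string          `json:"jsonrpc"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params,omitempty"`
	ID      *int64          `json:"id,omitempty"` // nil => notification
}

type Response struct {
	JSONRPC string `json:"jsonrpc"`
	Result  any    `json:"result,omitempty"`
	Error   *Error `json:"error,omitempty"`
	ID      *int64 `json:"id"`
}

type Error struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

func handle(raw []byte) []byte {
	var req Request
	if err := json.Unmarshal(raw, &req); err != nil {
		out, _ := json.Marshal(Response{JSONRPC: "2.0", Error: &Error{Code: -32700, Message: "parse error"}})
		return out
	}
	resp := Response{JSONRPC: "2.0", ID: req.ID}
	switch req.Method {
	case "add":
		var nums []float64
		if err := json.Unmarshal(req.Params, &nums); err != nil {
			resp.Error = &Error{Code: -32602, Message: "invalid params"}
			break
		}
		sum := 0.0
		for _, n := range nums {
			sum += n
		}
		resp.Result = sum
	default:
		resp.Error = &Error{Code: -32601, Message: "method not found"}
	}
	out, _ := json.Marshal(resp)
	return out
}

func main() {
	fmt.Println(string(handle([]byte(`{"jsonrpc":"2.0","method":"add","params":[1,2,3],"id":1}`))))
}
```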
The opaque API brings some niceties that other languages have, specifically around initialization. The Java protobuf implementation will never throw a NullPointerException, as calling `get` on a field just returns the default instance of that field.

The old Go Open Struct API did not do this. For many primitive types that was fine, but for protobuf maps you had to check in Go code whether the map had been initialized before accessing it. With the Opaque API you can just start adding items to a proto map in Go code without thinking about initialization (the Opaque implementation will init the map for you).

This is honestly something I wish Go itself would do. Allowing nil maps in Go is such a footgun.
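For readers who haven't been bitten by it, here's a quick sketch of the nil-map behavior being described, with a plain struct standing in for an Open Struct API generated message (the `Resource`/`Labels` names are invented):

```go
// Demonstrates the nil-map footgun: reads are fine, writes panic.
package main

import "fmt"

type Resource struct {
	Labels map[string]string // nil until someone initializes it
}

func main() {
	r := &Resource{}

	// Reads from a nil map are fine and return the zero value...
	fmt.Println(r.Labels["env"]) // prints ""

	// ...but an uncommented write would panic: assignment to entry in nil map.
	// r.Labels["env"] = "prod"

	// With the Open Struct API every caller has to remember this dance:
	if r.Labels == nil {
		r.Labels = make(map[string]string)
	}
	r.Labels["env"] = "prod"
	fmt.Println(r.Labels)
}
```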
It's interesting: to everyone but the mega shops like Google, protobuf is a schema declaration tool. To the mega shops it's a performance tool.

For most of my projects, I use a web framework I built on protobuf over the years, but I slowly got rid of a lot of the protobuf-y bits (besides the type and method declarations) and just switched to JSON as the wire format. HTTP/2, trailing headers, gigantic multi-MB files of getters, setters, and embedded binary representations of the schemas, weird import behaviors, no wire error types, etc. were too annoying.

Almost every project I've tracked that tries to solve the declarative schema problem seems to slowly die. It's a tough problem and an opinionated one (what to do with enums? sum types? defaults? etc.). Anyone know of any good ones that are chugging along? OpenAPI is too REST-y and JSON Schema doesn't seem to care about RPC.
Protobuf 3 was bending over backwards to try to make the Go API make sense, but in the process it screwed up the API for C++, with many compromises. Then they changed course and made presence explicit again in proto 3.1. Now they are saying Go gets a C++-like API.

What I'd like is to rewind the time machine and undo all the path-dependent brain damage.
> syntax = "proto2" uses explicit presence by default<p>> syntax = "proto3" used implicit presence by default (where cases 2 and 3 cannot be distinguished and are both represented by an empty string), but was later extended to allow opting into explicit presence with the optional keyword<p>> edition = "2023", the successor to both proto2 and proto3, uses explicit presence by default<p>The root of the problem seems to be go's zero-values. It's like putting makeup on a pig, your get rid of null-panics, but the null-ish values are still everywhere, you just have bad data creeping into every last corner of your code. There is no amount of validation that can fix the lack of decoding errors. And it's not runtime errors instead of compile-time errors, which can be kept in check with unit tests to some degree. It's just bad data and defaulting to carry on no matter what, like PHP back in the day.
I hate this API and Go's handling of protocol buffers in general. Especially preparing test data for it makes for some of the most cumbersome and unwieldy files you will ever come across. Combined with table-driven testing, you have thousands upon thousands of lines of data with unbelievably long identifiers that can't be inferred (e.g. in array literals), usually copy-pasted around and slightly changed. Updating and understanding all of that is a nightmare, and if you miss a comma or a brace somewhere, the compiler isn't smart enough to point you to where, so you get lines upon lines of syntax errors. But being opaque has some advantages for sure.
> version: 2, 3, 2023 (released in 2024)

I call this Battlefield versioning, after the Battlefield video game series [1]. I bet the next version will be proto V.

[1]: in order: 1942, 2, 2142, 3, 4, 1, V, 2042
I recently used code-gen'd protobuf deserialization objects as the value type for an in-memory DB and was considering flattening them into a more memory-efficient representation and using bitfields. That was for Java, though; not sure if they are doing the same thing there.

Glad to see this change; for that use case it would've been perfect.
Surprisingly, I saw this on the front page mere minutes after deciding to use protobufs in my new project.

Currently I'm not quite sold on gRPC, since the performance benefits seem to show up at a much larger scale than what I'm aiming for. So I'm using a proto schema to define my types and protoc codegen to generate only JSON marshaling/unmarshaling + types for my Go backend and TypeScript frontend, with JSON transferred between the two over REST endpoints.

Seems to give me good type safety along with zero headache in serializing/deserializing after transport.

One thing I also wanted to do was generate SQL schemas or migrations from my proto definitions, but I haven't found a tool to do so yet; I might end up making one.

Would love to know if any HN folk have ideas/critique regarding this approach.
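For the Go side of that setup, one option (a sketch, not necessarily what the commenter uses) is protojson, which implements the canonical proto3 JSON mapping that TypeScript generators generally also follow. Shown here with a well-known type so it runs without any custom generated code:

```go
// Marshal a protobuf message to canonical JSON for a REST response, then back.
package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ts := timestamppb.New(time.Date(2024, 12, 1, 12, 0, 0, 0, time.UTC))

	// Out: canonical proto3 JSON.
	data, err := protojson.Marshal(ts)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // "2024-12-01T12:00:00Z"

	// And back in on the request path.
	var back timestamppb.Timestamp
	if err := protojson.Unmarshal(data, &back); err != nil {
		panic(err)
	}
	fmt.Println(back.AsTime())
}
```

One caveat: protojson output is deliberately not byte-for-byte stable across library versions, so treat it as a wire format rather than something to diff or hash.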
Oh, this is great. I just did an implementation in gRPC in Go where I had to churn through 10 MB/s of data. I could not implement any kind of memory pool, and thus I had a lot of memory allocation issues, which led to bad memory usage and garbage collection eating up my CPU.
Why is code generation under-utilized? Protobufs and other Go tooling are great for code generation, yet in practice I see few teams using it at scale.

Lots of teams create REST/JSON APIs, but very few use code generation to provide compile-time protection.
Great, now there's an API per struct/message to learn and communicate throughout the codebase, with all the getters and setters.

A given struct is probably faster for protobuf parsing in the new layout, but the complexity of the code probably increases, and I can see this complexity easily negating those gains.
I can’t wait to try this new Protobuf Enterprise Edition, with its sea of getters and setters ad nauseam. /s

However, I can get behind it for the lazy decoding, which seems nice, though I doubt its actual usefulness for serious software (tm). As someone else already mentioned, an actual serious API (tm) will have business-scope types to decouple the API definition from the implementation. And that’s how you stay sane as soon as you have to support multiple versions of the API.

Also, a lot of the benefits mentioned as footgun reductions smell like workarounds for language shortcomings. Memory address comparisons, accidental pointer sharing and mutability, enums, optional handling, etc. are already solved problems, and they're where something like Rust shines. (Disclaimer: I run several gRPC APIs written in Rust in prod.)
BTW, if you care so much about performance, then fix the freaking array representation. It should simply be `[]SomeStruct` instead of `[]*SomeStruct`.

This one small change can result in an order-of-magnitude improvement.
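For what it's worth, a rough sketch of the locality/allocation argument, with plain structs standing in for generated messages (repeated message fields in the current generated Go code are `[]*Msg`); this only illustrates allocation counts, not protobuf itself:

```go
// Compare allocation counts for a flat slice vs. a slice of pointers.
package main

import (
	"fmt"
	"runtime"
)

type Point struct{ X, Y int64 }

// allocs runs f and returns the approximate number of heap allocations it made.
func allocs(f func()) uint64 {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	f()
	runtime.ReadMemStats(&after)
	return after.Mallocs - before.Mallocs
}

func main() {
	const n = 100_000

	flat := allocs(func() {
		s := make([]Point, n) // one contiguous backing array
		for i := range s {
			s[i].X = int64(i)
		}
		runtime.KeepAlive(s)
	})

	pointers := allocs(func() {
		s := make([]*Point, n)
		for i := range s {
			s[i] = &Point{X: int64(i)} // one allocation per element, scattered on the heap
		}
		runtime.KeepAlive(s)
	})

	fmt.Printf("[]Point:  ~%d allocations\n[]*Point: ~%d allocations\n", flat, pointers)
}
```

Whether the generator could actually switch to a flat slice is a separate question, since generated messages are routinely shared and mutated by pointer.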
GraphQL won the race for me. gRPC is no longer relevant: too many hurdles, no proper to-and-from web support. You have to use some third-party, non-free service.