I do wonder what the equivalent review of AWS Lambda would look like from an equivalent writer, one versed in GCP and K8S, encountering Lambda and IAM for the first time. I think one could easily make equally concerning arguments about all the things Lambda abstracts away.

HAVING SAID THAT, the fundamental difference between something like Lambda and something like K8S is the value proposition. Lambda works backwards from your business logic: how to go from 0 to “working production code” in the minimum number of steps, while still scaling essentially infinitely (so long as you are willing to tolerate the latency costs).

K8S seems to me to work backwards from your infrastructure instead: how to go from one set of working infrastructure to another set of working infrastructure, one optimized for solving problems your organization doesn't have yet, for the day you decide to migrate your whole cloud strategy to another provider because Azure gave you a 2% higher discount than AWS did.

Meanwhile, by the time you've built your first working K8S-based architecture, your competitor on Lambda is already in production serving customers.

I know, I know, it's not apples and oranges. But K8S and Lambda are of the same “generation”, both launching in 2014. Lambda was AWS's big bet: “Here is the amazing value you can get if you go all-in on AWS-native development.” K8S was GCP's equivalent: “Here's how you can design your cloud infra to be portable if you explicitly DON'T want to go all-in on one cloud provider.”

So while they offer almost diametrically opposite solutions (philosophically), they are pursuing the same developers and competing for the same mind share.

Me, I'll take the one that actually cares about getting me to production with working business logic.
I love this quote:

> I worry a little bit about hiding the places where the networking happens, just like I worry about ORM hiding the SQL. Because you can’t ignore either networking or SQL.

The irony (or not irony, who even knows?), though, is that I always worried a little bit about EBS for similar reasons. It's gotten really, really good, but it was a terrifying abstraction back in the day.
> for that, synchronous APIs quite likely aren’t what you want, event-driven and message-based asynchronous infrastructure would come into play. Which of course what I spent a lot of time working on recently. I wonder how that fits into the K8s/service-fabric landscape?

Unfortunately, unlike Envoy and networking, the story for event-driven architecture on Kubernetes is not yet as stable. Generally it’s roll-your-own for reliability. Eventually Knative Eventing will serve that purpose, but it hasn’t hit 1.0 [1] or anything close to that stable. The CRDs recently made it to beta, though, and an initial go at error handling was added relatively recently (mid-to-late last year) [2].

1. https://github.com/knative/eventing/releases

2. https://github.com/knative/eventing/tree/master/docs/delivery
We "use" traffic director at work. We thought we'd be able to use it for service discovery across our mesh, but it does way more than that in problematic ways that leak internal google abstractions :<<p>I asked one thing of the product team (roughly), "we like the bits where it tells envoy the nodes that are running the service, and what zones they are in. we don't like all the other weighting things you try and do to act like the google load balancer."<p>More in-depth we we've had traffic director direct traffic to instances within a specific zone by weighing that zone higher before instances in that zone would even start up, causing service degradation.<p>We considered writing our own XDS proxy which would run as a sidecar to envoy, connecting to traffic director and stripping all the weights.<p>After some back and forth with the TD team, we came up with a solution to instead fool it into not weighing things by setting the target CPU utilization of our backends to 1%...
> One thing made me gasp then laugh. Kelsey said “for the next step, you just have to put this in your Go imports, you don’t have to use it or anything:
>
>     _ "google.golang.org/xds"
>
> I was all “WTF how can that do anything?” but then a few minutes later he started wiring endpoint URIs into config files that began with xdi: and oh, of course. Still, is there, a bit of a code smell happening or is that just me?

It's not a code smell; it's idiomatic Go. The search he wants is [import for side effects].
> There seemed to be an implicit claim that client-side load balancing is a win, but I couldn’t quite parse the argument.

Having 1 hop for load balancing (usually through a server that most likely isn't on the same rack as your application) is worse than 0 hops with the load balancing done on localhost: no single point of failure, and much lower chances of a domino effect if a configuration goes awry.

> When would you choose this approach to wiring services together, as opposed to consciously building more or less everything as a service with an endpoint, in the AWS style?

I do not understand this. What's "wiring services together" vs "service with an endpoint"? They are one and the same thing in the context of gateway vs service mesh. Maybe you should read this: https://blog.christianposta.com/microservices/api-gateways-are-going-through-an-identity-crisis/

> but as with most K8s demos, assumes that you’ve everything up and running and configured

Because that is what the whole "Cloud" thing is about. You don't pull out an RJ45 cable every time you need to connect your server to your router; you just assume your cloud provider did it for you. You don't compile your own kernel and OS to run your services; you just use a Linux distro and get over it. Kubernetes is supposed to be a cloud offering; you are not supposed to configure and run it yourself.

> One thing made me gasp then laugh. Kelsey said “for the next step, you just have to put this in your Go imports, you don’t have to use it or anything... I was all “WTF how can that do anything?” but then a few minutes later he started wiring endpoint URIs into config files that began with xdi: and oh, of course. Still, is there, a bit of a code smell happening or is that just me?

Because you weren't paying attention to the speaker. He clearly mentions the drawbacks of the sidecar-as-proxy model: additional latency (much lower than that of a single gateway architecture, but even 1ms could be disastrous for some applications). To cater to that high-performance crowd they have the Envoy-as-an-application-library model, which is of course more difficult to adopt, but worth it when the added latency drops to nanoseconds.
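Concretely, the "0 hop" client side looks roughly like this in gRPC-Go's proxyless mode (hedging a bit: the import path is google.golang.org/grpc/xds in current gRPC-Go releases, the service name is made up, and a bootstrap file telling the client where the control plane lives is assumed to be configured, e.g. via GRPC_XDS_BOOTSTRAP):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        _ "google.golang.org/grpc/xds" // side effect: registers the "xds" resolver and balancers
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // The xds:/// scheme makes the client fetch endpoints and load-balancing
        // policy from the control plane (e.g. Traffic Director) and then balance
        // across backends inside this process: no extra network hop.
        conn, err := grpc.DialContext(ctx, "xds:///my-service",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        // ...pass conn to a generated client stub and make RPCs as usual.
    }

All the balancing decisions happen inside the client process, against endpoint data pushed by the control plane, which is where the "no extra hop, no single point of failure" argument comes from.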
> It’s impressive, but as with most K8s demos, assumes that you’ve everything up and running and configured because if you didn’t it’d take a galaxy-brain expert like Kelsey a couple of hours (probably?) to pull that together and someone like me who’s mostly a K8s noob, who knows, but days probably.

> I dunno, I’m in a minority here but damn, is that stuff ever complicated. The number of moving parts you have to have configured just right to get “Hello world” happening is really super intimidating.

> But bear in mind it’s perfectly possible that someone coming into AWS for the first time would find the configuration work there equally scary.

I feel like I missed AWS and K8s... I've been on-metal for the last 6 years, and on virtual machines for the 3 before that, with my use limited to bucket storage (S3, GCP).

So this sounds like something I could experiment with: take a simple existing app and lift and shift a basic equivalent onto AWS, Azure, and GCP in whatever is the most idiomatic way for each. Compare the learning curves, publish the code bases and config on GitHub, and see how much I missed (I just know IAM will be an issue, given the number of times I hear people complain about it).
I love when cloud experts (Tim, Kelsey) look at other clouds and go "damn, is that stuff ever complicated". Makes regular devs like myself feel a little better.
> Kelsey said “for the next step, you just have to put this in your Go imports, you don’t have to use it or anything:
>
>     _ "google.golang.org/xds"
>
> I was all “WTF how can that do anything?

Very suspect language design from golang.
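It's less language magic than it looks, though: importing a package runs its init() functions, and those typically just add an entry to a package-level registry. A self-contained sketch of the pattern (names invented here, not gRPC's actual API):

    package resolverdemo

    // resolvers is a package-level registry, mimicking the kind of global
    // table gRPC keeps for resolver schemes like "dns" or "xds".
    var resolvers = map[string]func(target string) []string{}

    // Register is what a scheme package would call from its own init().
    func Register(scheme string, resolve func(target string) []string) {
        resolvers[scheme] = resolve
    }

    func init() {
        // Runs as soon as this package is imported, even as `_ "..."`, so the
        // import alone changes which schemes the program understands.
        Register("xds", func(target string) []string {
            // A real implementation would talk to the xDS control plane here.
            return nil
        })
    }

    // Lookup returns the resolver registered for a scheme, if any.
    func Lookup(scheme string) (func(string) []string, bool) {
        r, ok := resolvers[scheme]
        return r, ok
    }

Whether an import that silently mutates global state like that is good design is a fair complaint, but that's all the blank import is doing.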