
Fly Kubernetes

272 points by ferriswil, over 1 year ago

30 comments

dangoodmanUT, over 1 year ago

This is really exciting, but there are a few things they will certainly have to work through:

*Services:*

Kubernetes expects DNS records like {pod}.default.svc.cluster.local. In order to achieve this, they will have to have some custom DNS records on the "pod" (fly machine) that resolve via their metadata. Not impossible, but something that has to be taken into account.

*StatefulSets:*

This has two major obstacles:

The first is dealing with disk. k8s expects that it can move disks to different logical pods when they lose them (e.g. mapping EBS to an EC2 node). The problem here is that fly has a fundamentally different model. It means that it either has to decide not to schedule a pod because it can't get the machine that the disk lives on, or not guarantee that the disk is the same. While this does exist as a setting currently, the former is a serious issue.

The second major issue is again with DNS. StatefulSets have ordinal pod names (e.g. {ss-name}-{0..n}.default.svc.cluster.local). While this can be achieved with their machine metadata and custom DNS on the machine, it means that it either has to run a local DNS server to "translate" DNS records to the fly nomenclature, or has to constantly update local services on machines to tell them about new records. Both will incur some penalty.
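The "translate DNS records to the fly nomenclature" idea above can be sketched as a small resolver shim. Everything here is hypothetical: the machine IDs, the internal zone name, and the lookup table are invented for illustration; only the StatefulSet naming scheme on the left-hand side is standard Kubernetes.

```python
# Toy sketch of a DNS-translation shim, assuming a metadata-driven lookup
# table. Machine IDs and the internal zone are made-up placeholders.

def statefulset_dns_names(ss_name: str, replicas: int,
                          namespace: str = "default") -> list[str]:
    """Ordinal DNS names Kubernetes guarantees for a StatefulSet."""
    return [f"{ss_name}-{i}.{namespace}.svc.cluster.local"
            for i in range(replicas)]

# Hypothetical table, kept in sync from machine metadata: each k8s pod
# name maps to whichever machine is currently backing it.
MACHINE_TABLE = {
    "db-0": "machine-91a8e2",  # invented machine IDs
    "db-1": "machine-4c77fd",
}

def resolve(dns_name: str) -> str:
    """Translate a k8s-style DNS name into a machine-level address."""
    pod = dns_name.split(".", 1)[0]          # e.g. "db-0"
    machine = MACHINE_TABLE[pod]
    return f"{machine}.vm.example.internal"  # placeholder internal zone

names = statefulset_dns_names("db", 2)
targets = [resolve(n) for n in names]
```

The penalty the comment mentions shows up in keeping `MACHINE_TABLE` current: either a local DNS server consults it on every query, or every machine must be told about each change.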
benpacker, over 1 year ago

Am I understanding correctly that because they map a "Pod" to a "Fly Machine", there's no intermediate "Node" concept?

If so, this is very attractive. When using GKS, we had to do a lot of work to get our Node utilization (the percent of resources we had reserved on a VM actually occupied by pods) to be higher than 50%.

Curious what happens when you run "kubectl get nodes" - does it lie to you, or call each region one Node?
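The utilization figure described above is just requested resources over allocatable resources, summed per node. A back-of-the-envelope sketch, with all numbers invented for illustration:

```python
# Bin-packing utilization: the fraction of a node's allocatable CPU
# actually claimed by pod requests. Figures below are illustrative.

def node_utilization(allocatable_mcpu: int,
                     pod_requests_mcpu: list[int]) -> float:
    """Fraction of allocatable CPU (in millicores) requested by pods."""
    return sum(pod_requests_mcpu) / allocatable_mcpu

# A node with 4000m allocatable, packed with four pods requesting 500m
# each: half the CPU the VM is billed for goes unclaimed.
util = node_utilization(4000, [500, 500, 500, 500])
print(f"{util:.0%}")  # 50%
```

Mapping a pod directly to a machine makes this number moot: there is no larger VM to leave half empty.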
corobo, over 1 year ago

Is this still a limitation for Fly k8s?

> A Fly Volume is a slice of an NVMe drive on the physical server your Fly App runs on. It's tied to that hardware.

Does the k8s have any kind of storage provisioning that allows pods with persistent storage (e.g. databases) to just do their thing without me worrying about it, or do I still need to handle disks potentially vanishing?

I think this is the only hold-up that stops me actually using Fly. I don't know what happens if my machine crashes and is brought back on different hardware. Presumably the data is just not there anymore.

Is everyone else using an off-site DB like Planetscale? Or just hoping it's an issue that never comes up, with backups just in case? Or maybe setting up full-scale DB clusters on Fly so it's less of a potential issue? Or 'other'?
asim, over 1 year ago

And fly becomes the standard cloud provider like everyone else. I think this transition is only natural. It's hard to be a big business without catering to the needs of larger companies, and that means operating many services, not individual apps.
verdverm, over 1 year ago

If they are reluctant and only do it because they have to, are they really the right vendor for managed k8s?

What about them makes for a good trade-off when considering the many other vendors?
motoboi, over 1 year ago

There is a very high price to pay when going with your own scheduling solution: you have to compete with the resources Google and others are throwing at the problem.

Also, there is the market for talent, which is non-existent for fly.io technology if it's not open source (I see what you did here, Google): you'll have to teach people how your solution works internally, and congratulations, now you have a global pool of 20 (maybe 100) people that can improve it (if you have really deep pockets, maybe you can have 5 PhDs). Damn, universities right now may have classes about Kubernetes for undergrad students. Will they teach your internal solution?

So, if a big part of your problem is already solved by a gigantic corporation investing millions to create a pool of talented people, you'd better make use of that!

Nice move, fly.io!
kuhsaft, over 1 year ago

How does this handle multiple containers for a Pod? In a container-runtime k8s, containers within a pod share the same network namespace (same localhost) and possibly the PID namespace.

The press release maps pods to machines, but provides no mapping of pod containers to a Fly.io concept.

Are multiple containers allowed? Do they share the same network namespace? Is sharing the PID namespace optional?

Having multiple containers per pod is a core functionality of Kubernetes.
thowrjasdf32432, over 1 year ago

Great writeup! Love reading about orchestration, especially distributed.

> When you create a cluster, we run K3s and the Virtual Kubelet on a single Fly Machine.

Why a single machine? Is it because this single fly machine is itself orchestrated by your control plane (Nomad)?

> ...we built our own Rust-based TLS-terminating Anycast proxy (and designed a WireGuard/IPv6-based private network system based on eBPF). But the ideas are the same.

Very cool - is this similar to how Cilium works?
nathancahill, over 1 year ago

Man, I just wish they'd work on stability. Fly.io is an amazing offering, but it's so buggy that it's almost more headache than it's worth trying to build PaaS-flavored software on it. Even the Fly docs are "buggy": they mostly transitioned to v2 Machines, but the docs are still a mix of Nomad and Machines.

There's so much power on the platform with Flycast, LiteFS, and other clever ways to work with containers. If it was 90% stable, I'd consider it a huge win.
edude03, over 1 year ago

I'm confused about what this is actually offering (also very tired due to some flight problems; anyway).

To me, I'd imagine Kubernetes on fly as running kind (Kubernetes in Docker) with fly converting the Docker images to Firecracker images, OR a "normal" Kubernetes API server running on one machine, then using CAPI or a homegrown thing for spinning up additional nodes as needed.

So, what's the deal here? Why K3s + a virtual kubelet?
siliconc0w, over 1 year ago

Always look forward to reading the fly.io blog write-ups. As much as people hate it, K8s has become the de facto operating system for the cloud, so it makes sense to support it.
0xbadcafebee, over 1 year ago

I like the discussion on scheduling. One of the things I've thought recently is that, since there's no one model of how an app or system should work, nor one network architecture, there shouldn't be one scheduler.

Instead, I think the system components should expose themselves as independent entities, and grant other system components the ability to use them under criteria. With this model, any software which can use the system components' interfaces can request resources and use them, in whatever pattern they decide to.

But this requires a universal interface for each kind of component, loosely coupled. Each component then needs to have networking, logging, metrics, credentials, authn+z, configuration. And there needs to be a method by which users can configure all this & start/stop it. Basically it's a distributed OS.

We need to make a standard for distributed OS components using a loosely coupled interface and all the attributes needed. So, not just a standard for logging, auth, creds, etc., but also a standard for networked storage objects that have all those other attributes.

When all that's done, you could make an app on Fly.io, and then from GCP you could attach to your Fly.io app's storage. Or from Fly.io, send logs to Azure Monitor Logs. As long as it's a standard distributed OS component, you just attach to it and use it, and it'll verify you over the standard auth, etc. Not over the "Fly.io integration API for Log Export", but over the "Distributed OS Logging Standard" protocol.

We've got to get away from these one-off REST APIs and get back to real standards. I know corporations hate standards and love to make their own little one-offs, but it's really holding back technological progress.
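The "universal interface per kind of component" idea above might be sketched as a shared protocol that any vendor's component implements. This is purely a toy illustration of the concept; every name and the token-based auth are invented:

```python
# Toy sketch of a "standard distributed OS component": one shared
# surface for auth and attachment, regardless of which vendor hosts it.
# All names and the credential check are hypothetical.
from typing import Protocol

class Component(Protocol):
    kind: str                                        # "logging", "storage", ...
    def authenticate(self, credential: str) -> bool: ...
    def attach(self, client_id: str) -> None: ...

class LogSink:
    """One vendor's logging component behind the shared interface."""
    kind = "logging"

    def __init__(self) -> None:
        self.lines: list[str] = []
        self.clients: set[str] = set()

    def authenticate(self, credential: str) -> bool:
        return credential == "demo-token"  # stand-in for real authn

    def attach(self, client_id: str) -> None:
        self.clients.add(client_id)

    def write(self, line: str) -> None:
        self.lines.append(line)

# Any client speaking the standard can attach, wherever it runs:
sink = LogSink()
assert sink.authenticate("demo-token")
sink.attach("app-on-another-cloud")
sink.write("hello from elsewhere")
```

The point of the protocol layer is that the client never imports a vendor SDK, only the standard interface.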
rileymichael, over 1 year ago

Having little experience with K3s: how big a workload ("nodes", a.k.a. virtual kubelets, pods, CRDs, etc.) can you have before saturating the non-HA control plane becomes a concern?
figassis, over 1 year ago

This looks interesting, but I run a bare-metal k8s cluster over WireGuard for independence. Not willing to rely on a nonstandard API/platform. Current provider annoys me, and I'm shutting down nodes the next day. Probably could not do that on FKS.
tootie, over 1 year ago

This is impressive, but also seems to fly in the face of their raison d'être. I don't even bother with k8s on AWS because it's too complex for even a mid-size operation. Isn't the point of PaaS to obscure complexity?
Dowwie, over 1 year ago

Wouldn't it have cost less to enhance the Nomad scheduler rather than move to, and enhance, Kubernetes?

This aside, Fly is in a position to build its own alternative to K8s and Nomad from scratch, so maybe it will?
alpb, over 1 year ago

I kind of miss the point of this. If I'm reading this right, fly.io practically only exposes the Pods API, but Kubernetes is really much more than that. I'm not familiar with any serious company that directly uses the Pods API to launch containers. So if their reimplementation of the Pods API is just a shim, and they're not going to be able to implement the ever-growing set of features in the Kubernetes Pod lifecycle/configuration (starting with /logs, /exec, /proxy...), why even bother branding it Kubernetes? Instead, they could do what Google does with Cloud Run (https://cloud.run/), which Fly.io is already doing.

I don't know why anyone would be like "here's a container execution platform, let me go ahead and use their fake Pods API instead of their official API".
gigapotential, over 1 year ago

Nice!

Was there an internal project name for this? Fubernetes? f8s? :D
qdequelen, over 1 year ago

Do you handle high-throughput volumes? I would need this for testing, to host a database service at scale.
4ggr0, over 1 year ago

I definitely want to try this! Never really worked with Kubernetes, because it always seemed too complicated for what I needed. After using fly.io for my first real web project in a while, they do seem to provide exactly what I want from a "hoster".
Kostic, over 1 year ago

Well, that's a surprise. Glad to see that the team is flexible and willing to change. :)
imjonse, over 1 year ago

Apples to oranges, but it has a similar vibe to when Deno eventually added npm compat.
znpy, over 1 year ago

> But, come on: you never took us too seriously about K8s, right?

What a strange way to admit they were wrong.
xgbi, over 1 year ago

I have so many questions - it is a very good article!

My most important one is this: can I build a distributed k8s cluster with this? I mean, having fly machines in Europe, the US, and Asia acting as a solid k8s cluster and letting the kube scheduler do its job? If yes, then it is better than the current cloud offerings, with their region-based implementations.

My second question is obviously how storage is handled when my workload migrates from the US to Europe: do I still profit from NVMe speeds? Is it replicated synchronously?

Last but not least: does it support RWM semantics?

If all the answers are yes: kudos, you just solved many folks' problems.

Stellar article, as usual.
k__, over 1 year ago
Wen custom OS?
netshade, over 1 year ago

I am a current Fly customer (personal and work), and have been happy with the service. Will likely be trying this out. That said, the marketing tone of this final part of the blog:

> More to come! We're itching to see just how many different ways this bet might pay off. Or: we'll perish in flames! Either way, it'll be fun to watch.

is like nails on a chalkboard for me.
hitpointdrew, over 1 year ago

> To keep things simple, we used Nomad, and instead of K8s CNIs, we built our own Rust-based TLS-terminating Anycast proxy (and designed a WireGuard/IPv6-based private network system based on eBPF).

That is quite the opposite of "simple". That is, in fact, overly complex and over-engineered.
joshuamcginnis, over 1 year ago

Why should one use Kubernetes? Or rather, at what point in an app's growth cycle does k8s become appropriate?
thorawy7, over 1 year ago

I ditched k8s and imported an eBPF library into my project. When certain conditions are met I fork logic, and scale back as needed. I haz a v8-like engine built into my project.

Not needing a bloated black-box sysadmin framework (aside from Linux itself, which is plenty bloated and over-engineered) is a huge time saver. And the eBPF libs have a lot of eyes on them.

IMO sysadmin and devops are done for. They lasted this long to "create jobs".
syrusakbary, over 1 year ago

This is one of the biggest footguns of a tech company I've seen in the last decade.

Time will tell whether embracing the complexity of Kubernetes was a good play for them or not. But, in all honesty, I'm pretty sad to see this happening, although I'm sure they had their reasons.