
Ask HN: Has any small dev team successfully deployed a complex app with docker?

3 points by eblanshey almost 7 years ago
I've been pondering the implications of using Docker / Kubernetes to deploy a fairly complex application to production. We are a small team of a few developers, and although I personally deal with deployment using scripts, I'm by no means a devops expert or sysadmin.

Whenever I look into Kubernetes and everything that goes with it (maintenance, monitoring, logging, etc.), I feel like it requires a full-time devops engineer to create and manage all of it. Our team will soon undergo a major rewrite of our application, and we need to decide whether to use Docker or continue using deployment scripts with Ansible.

Have any devs here successfully learned Docker and Kubernetes, deployed them in production, and not regretted the decision later? What benefits did you obtain? Any tips for a dev?
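For context, the script-plus-Ansible status quo the post describes might look something like this minimal playbook. The host group, paths, and service name are placeholders, not details from the post.

```yaml
# Hypothetical status-quo deploy: sync the built release and restart the service.
# Host group, paths, and service name are illustrative assumptions.
- hosts: app_servers
  become: true
  tasks:
    - name: Copy the built release onto the server
      ansible.builtin.copy:
        src: ./build/
        dest: /opt/myapp/

    - name: Restart the application service
      ansible.builtin.service:
        name: myapp
        state: restarted
```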

4 comments

n42 almost 7 years ago
like another commenter said, start small. if your application(s) follow the twelve-factor app methodology[1], containerizing them should not be challenging. that is the first step.

we are in the middle of a somewhat long rollout of Kubernetes by one engineer (me). by far the most time has been spent on making our applications work as twelve-factor applications, but a lot of that work happened before I even touched a Dockerfile.

we're at the point that our development team is currently using Kubernetes locally as their development environment (but not yet in production). while at times I have had moments of self-doubt and questioned our decision and approach, there have been many moments where the benefits have been made clear as day. the engineers are happy with the flexibility and consistency that Docker containers have brought to development, but it has come with more operational complexity.

ultimately, you need to decide what your target scale is for your engineering organization in 3, 6, 12, 24 months. we are in the beginning stages of a rapid growth phase for our engineering team, and developing complicated cross-cutting concerns for our products was becoming cost-prohibitive simply because our development environment was too complicated to set up, maintain, and debug. containerizing it temporarily relieved that pain and bought us time to then focus on scaling up those changes to production when appropriate.

[1]: https://12factor.net/
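A minimal sketch of what containerized local development can look like for a twelve-factor app, with all config injected through the environment rather than baked into the image. The service names, image tags, and variables are illustrative assumptions, not details from the comment.

```yaml
# docker-compose.yml: local dev environment with config from the environment
# (factor III of the twelve-factor methodology). Values are placeholders.
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      LOG_LEVEL: debug
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```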
imauld almost 7 years ago
> I feel like it requires a full-time devops engineer to create and manage all of it.

It does.

Kubernetes is a great piece of tech, but it is pretty complicated and does add a fair amount of overhead on top of whatever operational concerns your application already has. If you don't have anyone who knows how to build and manage a cluster, going to production with it would be extremely risky IMO.

I would recommend trying GKE, Google's managed k8s service, in staging/dev before even considering it as a serious path forward. If you are married to AWS or just don't want to use GCP, then kops would be your best bet. I have friends working with EKS, AWS' managed k8s service, and it doesn't sound anywhere near as ready as GKE or as flexible as doing it yourself; frankly it sounds like a real pain. I haven't used k8s on Azure, but I have heard that it's pretty good.

I also don't generally recommend deploying a new application as decomposed services. Unless you have done this a bunch of times, it will probably save you a lot of time to just build it as a monolith and deploy it to standard cloud VMs or on-prem servers. Also be aware that Docker, and by extension k8s, is not the best way to run stateful applications. It can be done, but it is definitely more work to get a k8s-based DB working the same way as a non-k8s DB in terms of data retention. I imagine a complex application will need some kind of data store, so even if you go with k8s you may still end up with non-k8s instances for your data.

k8s is great, but its overhead can easily outweigh its benefits if you don't have someone who can manage it. Start simple if you can and work from there.
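To give a sense of the shape of what you would manage even on a hosted cluster such as GKE, here is a bare-bones Deployment and Service for a stateless monolith. The image name, port, and replica count are placeholders, not anything from the comment.

```yaml
# Minimal Kubernetes manifests for a stateless app; apply with `kubectl apply -f`.
# Image, port, and replica count are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8000
```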
atmosx almost 7 years ago
> I've been pondering the implications of using Docker / Kubernetes to deploy a fairly complex application to production.

These systems are managed by dedicated teams most of the time. Kubernetes has many moving parts and debugging issues can be a nightmare.

Using Docker is the way of the future. Docker merges development/production environments, facilitates CI/CD, simplifies deployments/rollbacks/etc., and it even enforces best practices by separating layers (persistent vs ephemeral), and so on.

You could deploy your application through the Ansible docker module on EC2 instances, droplets, or what-have-you.

So the more subtle question is: why do you need an orchestrator?

Container orchestrators solve the problem of density. Say my stack is made of services running in 128MB of RAM each, and I want to scale them in and out quickly on demand.

If you don't have a density problem, e.g. your application will need 2GB of RAM anyway, I would say go with Docker & EC2 autoscaling. It is much easier to handle, and you won't have to debug weird network/logging issues and all the other problems that orchestrators bring along.

If you choose to go with an orchestrator, then for a small team I would advise taking a look at Docker Swarm. Swarm comes with service discovery, load balancing, and secrets & config handling built in. That's a big win for smaller teams, and the learning curve is rather small if you're already using Docker. What you will have to handle if you go with Swarm is:

- Cluster initialisation (if you choose to automate this part; you might not, but you'd better automate the rest)
- Node-level autoscaling
- Container autoscaling
- Dynamic routing (traefik or nginx + confd will solve this for you)
- Security (same goes for k8s or any other orchestrator; security requires an eye for detail & experience)

There are other minor issues (e.g. the Swarm internal load balancer won't forward the real IP of the request to the internal service, which can be a PITA in some cases; there are workarounds, mind you), but all orchestrators have minor issues and limitations.

Another word of caution about orchestrators: most teams don't need orchestration and don't have use cases that simpler setups cannot solve. Simple is smart, simple is genius. Keep it simple, until you can't keep that simple anymore.

Oh, and don't even think about adding a persistent layer inside the orchestrator! If a service uses Redis for caching, for example, it could be deployed as a stack in a Swarm cluster. But if you need persistence that goes beyond the lifecycle of the container, keep that data store out :-)

Good luck!
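A minimal sketch of the kind of Swarm stack file this comment alludes to, deployable with `docker stack deploy -c stack.yml myapp`: an ephemeral Redis cache lives inside the stack, while anything that must outlive its container stays outside the cluster. Image names and replica counts are assumptions for illustration.

```yaml
# stack.yml: minimal Swarm stack with a web service and an ephemeral Redis cache.
# Values are placeholders; persistent data stores are intentionally not included.
version: "3.8"
services:
  web:
    image: registry.example.com/myapp:1.0.0
    ports:
      - "80:8000"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        order: start-first
  cache:
    image: redis:5-alpine
    deploy:
      replicas: 1
```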
dylanhassinger almost 7 years ago
start small, don't prematurely optimize