
Ask HN: How do you keep track of releases/deployments of dozens of micro-services?

108 points, by vladholubiev, over 7 years ago

24 comments

dhinus, over 7 years ago
Our apps are made up of 5-15 (micro)services. I'm not sure if this approach would scale to hundreds of services managed by different teams.

We store the source code for all services in subfolders of the same monorepo (one repo <-> one app). Whenever a change in any service is merged to master, the CI rebuilds _all_ the services and pushes new Docker images to our Docker registry. Thanks to Docker layers, if the source code for a service hasn't changed, the build for that service is super quick: it just adds a new Docker tag to the _existing_ Docker image.

Then we use the Git commit hash to deploy _all_ services to the desired environment. Again, thanks to Docker layers, containers that haven't changed since the previous tag are recreated instantly because they are cached.

From the CI you can check the latest commit hash that was deployed to any environment, and you can use that commit hash to reproduce that environment locally.

Things that I like:

- the Git commit hash is the single thing you need to know to describe a deployment, and it maps nicely to the state of the codebase at that Git commit.

Things that do not always work:

- if you don't write the Dockerfile in the right way, you end up rebuilding services that haven't changed --> build time increases

- containers for services that haven't changed get stopped and recreated --> short unnecessary downtime, unless you do blue-green deployments
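A minimal sketch of that build-everything-and-tag-by-commit-hash flow, assuming a monorepo with one folder per service (the registry URL and service names are illustrative, not the poster's setup):

    # deploy_by_commit.py -- sketch of tagging every service image with the Git
    # commit hash; Docker layer caching makes unchanged services near-instant.
    import subprocess

    REGISTRY = "registry.example.com/myapp"   # hypothetical registry
    SERVICES = ["api", "worker", "frontend"]  # one subfolder per service

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    def build_and_push(commit):
        for svc in SERVICES:
            image = f"{REGISTRY}/{svc}:{commit}"
            # Unchanged source -> cached layers -> effectively just a new tag.
            sh("docker", "build", "-t", image, f"./{svc}")
            sh("docker", "push", image)

    if __name__ == "__main__":
        commit = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
        build_and_push(commit)
        print(f"Deployable tag for this build: {commit}")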
alex_duf, over 7 years ago
At the Guardian we use https://github.com/guardian/riff-raff

It takes a build from your build system (typically TeamCity, but not exclusively), deploys it and records the deployment.

You can then check later what's currently deployed, or what was deployed at some point in time, in order to match it with logs etc.

Not sure how usable it would be outside of our company though.
perlgeek, over 7 years ago
We have separate repos for each service, and use https://gocd.org/ to build, test and deploy each separately. But you could also configure it to only trigger builds from changes in certain directories. There is a single pipeline template from which all pipelines are instantiated.

Independent deployments are one of the key advantages of microservices. If you don't use that feature, why use microservices at all? Just for scalability? Or because it was the default choice?
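A sketch of the "only trigger builds from changes in certain directories" idea, assuming one top-level folder per service in a shared repo (folder names and the comparison ref are illustrative):

    # changed_services.py -- decide which per-service pipelines to trigger based
    # on the directories touched since the last deployed commit.
    import subprocess

    SERVICE_DIRS = {"billing", "search", "notifications"}  # illustrative

    def changed_services(since_ref="origin/production"):
        out = subprocess.check_output(
            ["git", "diff", "--name-only", f"{since_ref}...HEAD"], text=True
        )
        touched = {path.split("/", 1)[0] for path in out.splitlines() if "/" in path}
        return sorted(SERVICE_DIRS & touched)

    if __name__ == "__main__":
        for svc in changed_services():
            print(f"trigger pipeline for {svc}")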
joshribakoff, over 7 years ago
My experience with micro-services is with code-bases that have prematurely adopted the pattern. Based on this, my advice is as follows...

You can deploy the whole platform and/or refactor to a monolith, and maintain one change log, which is simple.

That however has its own downsides, so you should find a balance. If you're having trouble keeping track, perhaps re-organize. I read in one HN article that Amazon had 7k employees before they adopted microservices. The benefits have to outweigh the costs. Sometimes the solution to the problem is taking a step back. Without more details it's hard to say.

So basically one option is to refactor [to a monolith] and re-evaluate the split so that you no longer have this problem. Just throw each repo in a sub-folder and make that your new mono-repo and go from there. It is worth an exploratory refactoring, but it's not a silver bullet.
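If you do try the "each repo becomes a sub-folder" route, `git subtree` is one way to fold the repos in while keeping their history; a rough sketch (repo URLs are made up):

    # make_monorepo.py -- fold existing service repos into subfolders of one repo
    # using `git subtree add`, so the exploratory refactor stays reversible.
    import subprocess

    REPOS = {
        "auth":    "git@example.com:org/auth.git",
        "billing": "git@example.com:org/billing.git",
    }

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        for prefix, url in REPOS.items():
            # --prefix places the imported repo's history under a subfolder.
            sh("git", "subtree", "add", "--prefix", prefix, url, "master")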
vcool07, over 7 years ago
Something called "integration testing" has to be done before the final build, which clearly flags any compatibility issues between components.

Every component comes with a major/minor release number, which tells you about the nature of the change that has gone in. For example, the major release is incremented for a change that usually introduces a new feature/interface. Minor release numbers are reserved for bug fixes/optimizations that are more internal to the component.

The build manager can go through the list of all the delivered fixes and cherry-pick the few which can go into the final build.
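A tiny sketch of that major/minor convention used as a gate when cherry-picking (the version numbers and the two-part scheme are illustrative):

    # version_gate.py -- a major bump signals a new feature/interface, a minor
    # bump an internal fix; only same-major candidates are safe to pick directly.
    def is_compatible(deployed: str, candidate: str) -> bool:
        dep_major, _ = map(int, deployed.split("."))
        cand_major, _ = map(int, candidate.split("."))
        return cand_major == dep_major

    print(is_compatible("3.4", "3.7"))  # True  -- minor-only change
    print(is_compatible("3.4", "4.0"))  # False -- major change, needs integration testing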
bootcat, over 7 years ago
In the company I worked for, they had their own CI/CD system which tracked information about each service and the systems it had to deploy to. Once it was all configured, it was basically button pushes. The system also tracked feedback after deployment to confirm whether the build went well or needed to be fixed; if certain parameters looked unhealthy, it did an automatic rollback. There were also canary deployments, so code was first pushed only to a portion of systems to make sure it was deployed correctly and actually worked. If not, it was rolled back.
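A rough sketch of that canary-then-rollback loop; the metric source, threshold and deploy hooks are placeholders, not the actual system described:

    # canary_gate.py -- deploy to a slice of instances, watch feedback, then
    # either promote to the whole fleet or roll back automatically.
    import random

    def error_rate(service: str, version: str) -> float:
        # Stand-in for a real metrics query against your monitoring system.
        return random.uniform(0.0, 0.05)

    def deploy(service, version, fraction):   # placeholder hook
        print(f"deploy {service}={version} to {fraction:.0%} of instances")

    def rollback(service):                    # placeholder hook
        print(f"rollback {service}")

    def canary_release(service: str, new_version: str, threshold: float = 0.02) -> bool:
        deploy(service, new_version, fraction=0.1)      # canary on 10% of instances
        if error_rate(service, new_version) > threshold:
            rollback(service)                           # bad feedback -> roll back
            return False
        deploy(service, new_version, fraction=1.0)      # promote to the whole fleet
        return True

    if __name__ == "__main__":
        canary_release("checkout", "1.8.2")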
wballard, over 7 years ago
We've been using our own setup for 4 years now: https://github.com/wballard/starphleet

We have 200 services, counting beta and live test variants. Most of the difficulties vanished once we had declarative, versioned control of our service config in the "headquarters" repository.

Not aware of anyone else using this approach.
whistlerbrk, over 7 years ago
In the past I've used a single repo with all the code, which gets pushed everywhere, and each service only runs its portion of the code. No guesswork involved, but this may not work for a lot of setups, of course. That, and your graceful restart logic has to be slightly more involved.
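A minimal sketch of that "one codebase everywhere, each node runs only its portion" pattern, assuming the role is chosen by an environment variable (names are illustrative):

    # entrypoint.py -- same code is shipped to every node; an env var picks
    # which portion of it this node actually runs.
    import os

    def run_api():    print("serving API")
    def run_worker(): print("processing jobs")

    ROLES = {"api": run_api, "worker": run_worker}

    if __name__ == "__main__":
        ROLES[os.environ.get("SERVICE_ROLE", "api")]()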
twic, over 7 years ago
At an old company, we wrote this, er, "model driven orchestration framework for continuous deployment":

https://github.com/tim-group/orc

Basically, there's a Git repo with files in it that specify the desired versions and states of your apps in each environment (the "configuration management database").

The tool has a loop which converges an environment on what is written in the file. It thinks of an app instance as being on a particular version (old or new), started or stopped (up or down), and in or out of the load balancer pool, and knows which transitions are allowed, e.g.:

    (old, up, in)    -> (old, up, out)    - ok
    (old, up, out)   -> (old, up, in)     - no! don't put the old version in the pool!
    (old, up, out)   -> (old, down, out)  - ok
    (old, up, in)    -> (old, down, in)   - no! don't kill an app that's in the pool!
    (old, down, out) -> (new, down, out)  - ok
    (old, up, out)   -> (new, up, out)    - no! don't upgrade an app while it's running!

Based on those rules, it plans a series of transitions from the current state to the desired state. You can model the state space as a cube, where the three axes correspond to the three aspects of the state, vertices are states, and edges are transitions, some allowed, some not. Planning the transitions is then route-finding across the cube. When I realised this, I made a little origami cube to illustrate it, and started waving it at everyone. My colleagues thought I'd gone mad.

You need one non-cubic rule: there must be at least one instance in the load balancer at any time. In practice, you can just run the loop against each instance serially, so that you only ever bring down one at a time.

This process is safe, because if the tool dies, it can just start the loop again, look at the current state, and plan again. It's also safe to run at any time: if the environment is in the desired state, it's a no-op, and if it isn't, it gets repaired.

To upgrade an environment, you just change what's in the file and run the loop.
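A small sketch of that state-cube planner: encode the three safety rules as allowed edges and route-find with BFS. This is an illustration of the idea, not orc's actual code:

    # cube_planner.py -- states are (version, run, pool) triples; only safe
    # single-axis transitions are edges; a plan is a shortest path between states.
    from collections import deque
    from itertools import product

    STATES = set(product(["old", "new"], ["up", "down"], ["in", "out"]))

    def allowed(a, b):
        changed = [i for i in range(3) if a[i] != b[i]]
        if len(changed) != 1:
            return False                  # one axis at a time
        axis = changed[0]
        if axis == 2 and b[2] == "in" and a[0] == "old":
            return False                  # don't put the old version in the pool
        if axis == 1 and b[1] == "down" and a[2] == "in":
            return False                  # don't kill an app that's in the pool
        if axis == 0 and a[1] == "up":
            return False                  # don't upgrade an app while it's running
        return True

    def plan(start, goal):
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in STATES - seen:
                if allowed(path[-1], nxt):
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    if __name__ == "__main__":
        for step in plan(("old", "up", "in"), ("new", "up", "in")):
            print(step)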
underyx, over 7 years ago
We wrote https://github.com/kiwicom/crane, which posts and updates a nicely formatted Slack message with the status of releases. It also posts release events to Datadog (in a version we're publishing soon) and to an API that records them in a Postgres DB we keep for analytics queries.
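A minimal sketch of posting a release status to a Slack incoming webhook, in the spirit of the above (the webhook URL and message format are illustrative, not crane's API):

    # notify_release.py -- post a release status line to Slack via an incoming webhook.
    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

    def announce(service: str, version: str, env: str, status: str) -> None:
        payload = {"text": f"*{service}* `{version}` -> {env}: {status}"}
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        announce("booking-api", "2024.1.7", "production", "deployed")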
drdrey, over 7 years ago
https://www.spinnaker.io/

Full disclosure: I'm on the Spinnaker team.
mickeyben, over 7 years ago
What do you mean by keep track? Do you want to be aware of deployments? A Slack notification could do it. Or do you want to correlate deployments with other metrics?

In that case, we instrument our deployments into our monitoring stack (InfluxDB/Grafana) and use them as annotations for the rest of our monitoring.

We can also graph the number of releases per project on different aggregates.
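A sketch of recording a deployment as an InfluxDB point so it can show up as an annotation alongside other metrics (the host, database and field names are assumptions; this uses the InfluxDB v1 line-protocol write endpoint):

    # mark_deploy.py -- write one point per deployment into InfluxDB.
    import time
    import urllib.request

    INFLUX_WRITE = "http://influxdb.internal:8086/write?db=deployments"  # hypothetical

    def record_deploy(service: str, version: str, env: str) -> None:
        ts = int(time.time() * 1e9)  # nanosecond timestamp expected by InfluxDB
        line = f'deploy,service={service},env={env} version="{version}" {ts}'
        urllib.request.urlopen(INFLUX_WRITE, data=line.encode())

    if __name__ == "__main__":
        record_deploy("payments", "a1b2c3d", "production")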
geocar, over 7 years ago
Service discovery contains all the versions and who should be directed at what.

We also store stats in the service discovery app, so versions can be promoted to "production" for a customer once the account management team has reviewed and updated their internal training.
_drFaust, over 7 years ago
Got about 80+ services. One repo per service; each service has its own Kubernetes YAML that describes how the service deploys to the cluster. K8s has a huge ecosystem for monitoring, versioning, health, autoscaling and discovery. On top of that, each repo has a separate Slack channel that receives notifications for repo changes, comments, deployments, container builds, Datadog monitoring events, etc. There are also core maintainers per repo to maintain consistency.

For anyone that has begun the microservice journey, Kubernetes can be intimidating but is well worth it. Our original microservice infrastructure was rolled way before K8s, and it's just night and day to work with now; the Kubernetes team has thought of just about every edge case.
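A small sketch of answering "what is deployed right now?" in a setup like this, by asking Kubernetes for each Deployment's image tags via kubectl (the namespace is an assumption):

    # whats_deployed.py -- list the image tag(s) each Deployment is running.
    import json
    import subprocess

    def deployed_versions(namespace: str = "production"):
        out = subprocess.check_output(
            ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"], text=True
        )
        for dep in json.loads(out)["items"]:
            name = dep["metadata"]["name"]
            images = [c["image"] for c in dep["spec"]["template"]["spec"]["containers"]]
            yield name, images

    if __name__ == "__main__":
        for name, images in deployed_versions():
            print(name, *images)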
discordianfish, over 7 years ago
Keep track as in having a version-controlled record of all revisions/versions deployed? That's something I would be interested in solutions to as well, especially in a Kubernetes environment with CI.

I could probably snapshot the Kubernetes state to have a trail I can use to roll back to a point in time. Alternatively, I've thought about having CI update manifests in an integration repo and deploying from there, so that every change to the cluster is reflected by a commit in that repository.
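A rough sketch of the snapshot idea: dump the relevant manifests into a git repo and commit, giving a point-in-time trail to roll back against (the repo path and resource list are assumptions):

    # snapshot_cluster.py -- snapshot Kubernetes manifests into a git repo, one
    # commit per snapshot, as an audit trail of what was deployed when.
    import subprocess
    from datetime import datetime, timezone

    REPO = "/var/lib/cluster-snapshots"   # a local git clone (illustrative)

    def snapshot(namespace: str = "production") -> None:
        manifests = subprocess.check_output(
            ["kubectl", "get", "deployments,services,configmaps",
             "-n", namespace, "-o", "yaml"], text=True
        )
        with open(f"{REPO}/{namespace}.yaml", "w") as f:
            f.write(manifests)
        stamp = datetime.now(timezone.utc).isoformat()
        subprocess.run(["git", "-C", REPO, "add", "."], check=True)
        subprocess.run(["git", "-C", REPO, "commit", "-m", f"snapshot {namespace} {stamp}"],
                       check=True)

    if __name__ == "__main__":
        snapshot()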
chillydawg, over 7 years ago
We built a small internal service that receives updates from the build & deployment scripts we run, and then presents us with an HTML page showing which branch & commit of everything is deployed (along with the branch and commit of every dependency), where, when and by whom. It's totally insecure, so it can be trivially spoofed, but it's our V1 for our fleet of golang services and it works well.
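A minimal sketch of such an internal deploy board: build scripts POST a JSON report, humans GET a plain-text overview (the endpoints and fields are invented for illustration, not the poster's actual service):

    # deploy_board.py -- tiny in-memory service recording "what is deployed where".
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DEPLOYS = {}  # (service, environment) -> last reported deploy

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):            # build/deploy scripts POST JSON here
            body = self.rfile.read(int(self.headers["Content-Length"]))
            d = json.loads(body)
            DEPLOYS[(d["service"], d["env"])] = d
            self.send_response(204)
            self.end_headers()

        def do_GET(self):             # humans GET a plain-text overview
            rows = [f'{s} @ {e}: {d["branch"]} {d["commit"]} by {d["user"]}'
                    for (s, e), d in sorted(DEPLOYS.items())]
            body = "\n".join(rows).encode() or b"nothing deployed yet"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()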
ukoki, over 7 years ago
Have a CI/CD pipeline that does the following (a sketch of the fan-in step is below):

- unit-tests each service

- all services fan in to a job that builds a giant tar file of source/code artefacts. This includes a metadata file that lists service versions or commit hashes

- this "candidate release" is deployed to a staging environment for automated system/acceptance testing

- it is then optionally deployed to prod once the acceptance tests have passed
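A sketch of that fan-in step, bundling each service's artefact together with a metadata file listing the exact versions in the candidate release (the paths and service names are illustrative):

    # build_candidate.py -- assemble a "candidate release" tarball plus metadata.
    import json
    import tarfile

    ARTEFACTS = {                       # service -> (artefact path, commit hash)
        "api":    ("build/api.tar.gz",    "a1b2c3d"),
        "worker": ("build/worker.tar.gz", "9f8e7d6"),
    }

    def build_candidate(out_path: str = "candidate-release.tar") -> None:
        metadata = {svc: commit for svc, (_, commit) in ARTEFACTS.items()}
        with open("versions.json", "w") as f:
            json.dump(metadata, f, indent=2)
        with tarfile.open(out_path, "w") as tar:
            tar.add("versions.json")    # the manifest of what this candidate contains
            for _, (path, _) in ARTEFACTS.items():
                tar.add(path)

    if __name__ == "__main__":
        build_candidate()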
char_pointer, over 7 years ago
https://github.com/ankyra/escape (disclaimer: I'm one of the authors)

We use Escape to version and deploy our microservices across environments, and even relate them to the underlying infrastructure code, so we can deploy our whole platform as a single unit if need be.
nhumrich, over 7 years ago
We use GitLab CI for pipelines, which is great. You can figure out when everything was last deployed, etc. We even built our own dashboard using the GitLab API that shows all the latest deploys, just so it's easier to track down what was recently deployed when we are investigating issues.
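A sketch of pulling recent deployments per project from GitLab's deployments API for such a dashboard (the host, token and project IDs are placeholders):

    # latest_deploys.py -- list the most recent deployments for each project.
    import json
    import urllib.request

    GITLAB = "https://gitlab.example.com/api/v4"
    TOKEN = "glpat-..."                 # hypothetical access token
    PROJECT_IDS = [42, 57]              # hypothetical project IDs

    def latest_deployments(project_id: int, per_page: int = 5):
        url = (f"{GITLAB}/projects/{project_id}/deployments"
               f"?order_by=created_at&sort=desc&per_page={per_page}")
        req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
        return json.load(urllib.request.urlopen(req))

    if __name__ == "__main__":
        for pid in PROJECT_IDS:
            for d in latest_deployments(pid):
                print(pid, d["environment"]["name"], d["sha"][:8], d["created_at"])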
ecesena, over 7 years ago
Maybe I'm misunderstanding the question, but you may want to have a look at Envoy: https://www.envoyproxy.io
invisible, over 7 years ago
We use Jenkins for releases and Kubernetes for deployments, if I understand the question correctly. We'd like to use something like Linkerd to simplify finding dependencies.
lfalcao, over 7 years ago
https://github.com/zendesk/samson
brango, over 7 years ago
Master = stable and in prod; non-master branches = dev & staging. Jenkins deploys automatically on git commits.
hguhghuff, over 7 years ago
In what technical environment? More info needed.