TechEcho

Ask HN: Can any Hetzner user please explain your workflow on Hetzner?

113 points by nerdyadventurer, about 2 years ago
I am thinking of trying out Hetzner for hosting front-ends and back-ends. I have some questions about the workflow on Hetzner.

How do you

- deploy from source repo? Terraform?

- keep software up to date? ex: Postgres, OS

- do load balancing? built-in load balancer?

- handle scaling? Terraform?

- automate backups? ex: databases, storage. Do you use the provided backups and snapshots?

- maintain security? built-in firewall and DDoS protection?

If there are any open source automation scripts, please share.

38 comments

alex7734, about 2 years ago
Not Hetzner, but a similar provider:

    - Deploy by stopping the server, rsyncing in the changes, and starting the server. The whole thing is automated by script and takes 5 seconds, which is acceptable for us.
    - Run apt upgrade manually, biweekly or so.
    - We use client-side load balancing (the client picks an app server at random), but most cloud providers will give you a load balancer IP that transparently does the same thing (not for free, though).
    - For scaling, just manually rent more servers.
    - For backups, we use a cronjob that does the backup and then uploads it to MEGA.
    - For security, we set up a script that runs iptables-restore, but this isn't really all that necessary if you don't run anything that listens on the network (except your own server, obviously).
    - DDoS is handled transparently by our provider.

While this might change if you're super big and have thousands of servers, in my experience simple is best, and "just shell scripts" is the simplest solution to most sysadmin problems.
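Stripped to its essentials, that stop/rsync/start deploy could look like the sketch below. The host, paths and unit name are hypothetical placeholders; DRY_RUN=1 (the default here) prints each command instead of running it, so you can inspect the sequence before pointing it at a real server.

```shell
#!/bin/sh
# Sketch of a stop/rsync/start deploy as described above.
# HOST, APP_DIR and the systemd unit name are placeholders.
HOST="${HOST:-deploy@app-server}"
APP_DIR="${APP_DIR:-/srv/myapp}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  # With DRY_RUN=1, print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

deploy() {
  run ssh "$HOST" "systemctl stop myapp.service" || return 1
  run rsync -az --delete ./build/ "$HOST:$APP_DIR/" || return 1
  run ssh "$HOST" "systemctl start myapp.service"
}
```

Set DRY_RUN=0 to actually execute; the whole run stays well under the five-second window mentioned above for small payloads.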
vlaaadabout 2 years ago
ssh root@hetzner-server-ip &quot;cd my-server &amp;&amp; git pull &amp;&amp; .&#x2F;prepare.sh &amp;&amp; systemctl restart my.service &amp;&amp; journalctl -u my.service -f&quot;<p>To expand a little bit:<p>- It&#x27;s a very small service<p>- I use sqlite db<p>- Preparation step before the restart ensures all the deps are downloaded for the new repo state. I.e. &quot;a build step&quot;<p>- I use simple nginx in front of the web server itself<p>- Backups are implemented as a cron job that sends my whole db as an email attachment to myself<p>- journalctl shows how it restarted so I see it&#x27;s working
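The db-as-email-attachment backup could be a single crontab entry along these lines. Paths and the recipient are placeholders, and the attachment flag is an assumption: this uses s-nail's `mail -a`; other mail clients use different flags.

```
# Hypothetical crontab entry: nightly sqlite snapshot mailed to yourself.
# Note: a bare % is special in crontab command fields and must be escaped as \%.
0 3 * * * sqlite3 /root/my-server/app.db ".backup /tmp/app.db" && echo "nightly db backup" | mail -s "db backup $(date +\%F)" -a /tmp/app.db me@example.com
```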
0xblinq, about 2 years ago
I only use it for side projects right now, and in the past for a real production application for which "high availability" was not a problem (I could do occasional maintenance windows out of work hours). Here's how I did it, in case it helps you:

> deploy from source repo? Terraform?

I use Dokku (https://dokku.com/); the workflow is then the same as if you were using Heroku.

> keep software up to date? ex: Postgres, OS

Automatic Ubuntu updates, plus once a week I SSH in and apt-get update, etc.

> do load balancing? built-in load balancer?

I just don't. I don't need it for the load of my projects.

> handle scaling? Terraform?

Just vertical scaling for now. A single powerful server can go a long way before you might need to add more servers.

> automate backups? ex: databases, storage. Do you use provided backups and snapshots?

I just enable the "backup" feature in their admin panel. It adds 20% to the cost, but it works great and it's easy.

> maintain security? built-in firewall and DDoS protection?

I only expose the HTTP(S) and SSH ports, and I have also set up fail2ban against brute-force attacks.

> If there are any open source automation scripts, please share.

Dokku.
nemo136, about 2 years ago
> 50 machines at Hetzner

- install machines with ansible (using Hetzner scripts for the OS install)

- machines communicate over vswitch/vlans, external interfaces disabled whenever possible; pay attention to the custom MTU trick

- harden machines; unattended-upgrades mandatory on each machine

- SSH open with IP whitelists via iptables on gateways

- machines organized as k8s clusters; took ~1 year to have everything working cleanly

- everything deployed as k8s resources (kustomize, fluxcd, gitops)

- keepalived for external IPs, with floating IPs for ingress on 3 machines per cluster

Machines are managed as cattle; it takes <1h plus Hetzner provisioning time to add as many machines as we need.
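A minimal sketch of the keepalived side of that ingress setup (interface name, router id and the floating IP are placeholders). Note that plain VRRP alone does not move a Hetzner floating IP; the MASTER transition typically has to trigger a notify script that reassigns the IP through the Hetzner API.

```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        203.0.113.10/32    # the floating IP (placeholder)
    }
    # Hypothetical hook: call the Hetzner API to reassign the
    # floating IP to this machine when it becomes MASTER.
    notify_master "/usr/local/bin/assign-floating-ip.sh"
}
```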
mtmail, about 2 years ago
https://github.com/hetznercloud/awesome-hcloud/ collects various devops tools for Hetzner Cloud.
e12e, about 2 years ago
The recent demo of MRSK from 37signals used Hetzner as the first example:

> Introducing MRSK - 37signals way to deploy

https://www.youtube.com/watch?v=LL1cV2FXZ5I
jasonvorhe, about 2 years ago
It's not even close to major public cloud providers, but this is my setup:

* https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner (Terraform, Kubernetes bootstrap)

* Flux for CI

* nginx-ingress + Hetzner load balancer (thanks to https://github.com/hetznercloud/hcloud-cloud-controller-manager)

* Hetzner storage volumes (thanks to https://github.com/hetznercloud/csi-driver)

Kube-Hetzner supports Hetzner Cloud load balancers and volumes out of the box, though it also supports other components.
cstuder, about 2 years ago
For my hobby server:

    - Running Dokku with Heroku buildpacks to deploy both from source and to run Docker images behind an nginx reverse proxy.
    - Auto-upgrading apt packages, manually updating the OS.
    - No load balancing.
    - No scaling.
    - Automated backups with restic/rclone to OneDrive.
    - Hetzner firewall, no DDoS protection.
Y_Y, about 2 years ago
Manually provision long-running VMs and manage containers with yacht.sh, and that's it, really. There's nothing special about Hetzner that makes it qualitatively different from any other cloud provider, except for enterprise features.
mesmertech, about 2 years ago
- Deploy using docker swarm; CI sshes into the machine, pulls the repo and runs it

- don't remember the last time I updated, lol

- traefik + worker nodes on docker swarm

- again, docker swarm

- I have a cronjob that makes a backup using postgres, then uploads it to DigitalOcean Spaces; you can just use S3 as well

- I'm using Cloudflare in front of the server, but I also use the built-in firewall, as I host a postgres server with Hetzner (only allow traffic from the web server worker nodes)
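A sketch of that kind of backup cronjob: dump, upload, prune old local dumps. The database name, bucket, endpoint and paths are all placeholders; `aws s3 cp` with `--endpoint-url` works against DigitalOcean Spaces or any other S3-compatible storage.

```shell
#!/bin/sh
# Hypothetical pg_dump-and-upload job plus local retention, as described above.
BACKUP_DIR="${BACKUP_DIR:-/var/backups/pg}"
KEEP="${KEEP:-7}"   # how many local dumps to retain

backup() {
  pg_dump -Fc mydb > "$BACKUP_DIR/mydb-$(date +%F).dump" || return 1
  aws s3 cp "$BACKUP_DIR/mydb-$(date +%F).dump" "s3://my-backups/" \
    --endpoint-url "https://example-region.digitaloceanspaces.com"
}

prune_old_backups() {
  # Keep only the $KEEP newest *.dump files in $BACKUP_DIR.
  ls -1t "$BACKUP_DIR"/*.dump 2>/dev/null | tail -n +"$((KEEP + 1))" | xargs -r rm --
}
```

Run `backup && prune_old_backups` from cron; the prune step keeps the local disk from filling up while the bucket holds the full history.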
simon83, about 2 years ago
> maintain security? built-in firewall and DDoS protection?

I have a Hetzner dedicated server (not the cloud offering) and I set up OPNsense as an all-in-one routing and firewall solution in a separate VM. All incoming and outgoing traffic goes through this OPNsense VM, which acts as the default gateway for the host system and all other VMs/Docker containers. You either need to book a second public IPv4 address (or just use IPv6 for free if that is good enough for your use case, since each server comes with an IPv6 /64 subnet), or, if you want to stick with one IPv4 address, you can do some MAC spoofing on the main eth interface of the host OS and give the actual MAC address and public IP to OPNsense's WAN interface. This is necessary because Hetzner has MAC address filtering in place, meaning only the MAC address bound to the public IP is allowed to send traffic.
sshine, about 2 years ago
I provision a single VPS that acts as Terraform & Ansible control:

    - Store and run the Terraform setup in git
    - Store and distribute SSH keys
    - Store and run Ansible scripts for bootstrapping (e.g. Kubernetes clusters on dedicated servers, or more VPSes)
    - Host a VPN and some low-intensity services (I'd delegate both of these if I had a bigger budget)

Specifically, this replaces the use of Terraform Cloud.

I enjoyed using Terraform Cloud for a more cloudy setup with easy GitHub pull-request integration at a past employer.

But I'm specifically aiming for simplicity here. It doesn't scale as well to a team of 2+ without establishing conventions.

I haven't explored what self-hosted alternatives there are to Terraform Cloud.
artellectual, about 2 years ago
I have been working on https://instellar.app to solve this very problem. It allows you to use S3-compatible storage and your compute/database provider of choice. So you can use Hetzner or DigitalOcean or AWS or Google Cloud, anything you want. For your database you can use DigitalOcean's managed offering / Aiven.io / RDS / Google Cloud SQL. This tool brings it all together and lets you simply focus on shipping code.

It does load balancing / automatic SSL issuing out of the box. It will also allow you to scale horizontally. I'm working towards making it public soon.
notpushkin, about 2 years ago
Not a Hetzner user, but I believe you can do pretty much anything you can use any other VPS for. I deploy all my stuff on a single server using https://lunni.dev/ (disclaimer: I'm also the author of Lunni). It is a web interface over Docker Swarm with sane defaults for working with web apps.

- Deploy from source repo: the Lunni docs guide you through setting up CI that builds your repo as a Docker image, and you can create a webhook that pulls it and redeploys.

- Scaling, load balancing: in theory you can just throw more servers into the swarm, tweak your configuration a bit, and it should work. However, I've yet to run past what a single, moderately beefy server can handle :')

- Automate backups: definitely on my roadmap! Right now I'm configuring them manually on critical services, and doing them manually every now and then using the Vackup script.

- Maintain security: Docker's virtual networks act as a de facto firewall here. In Lunni, you only expose the services you need to the reverse proxy (for HTTP), and if you absolutely must expose some ports directly (e.g. SSH for Git), you have to explicitly list them.

Some other similar alternatives to consider: Dokku, Coolify, Portainer with Traefik / Caddy / nginx-gen. I'll be glad if you choose Lunni though :-) Let me know if you have any questions!
bluelu, about 2 years ago
For dedicated servers:

- deploy from source repo? Terraform?

* local build server, which rsyncs to the application servers (e.g. files), or through a docker registry
* scripts to start/stop/restart services
* a centralised database of which services run on which servers, which serves as the base for where specific applications run

- keep software up to date? ex: Postgres, OS

* ansible for automated installs (through the Hetzner API)
* ansible scripts to execute commands on servers (e.g. update software, or adapt the firewall when new hosts are added)

- do load balancing? built-in load balancer?

* a proxy to route requests to multiple backend servers (e.g. nginx)
* flexi IP (needs to be manually mapped to a new server over the API in case of failure, so you need to check yourself that the IP is reachable)

- handle scaling? Terraform?

* more servers

- automate backups? ex: databases, storage. Do you use provided backups and snapshots?

* a separate HDFS cluster, which allows production nodes to write once and read data, but not delete/overwrite any data
* for less data, you could also use their backup servers
* the "backups and snapshots" feature you mention is only available for vservers, not for dedicated servers

- maintain security? built-in firewall and DDoS protection?

* Hetzner router firewall
* software firewall (managed through ansible)
* don't use their VLAN feature, as there often seem to be problems with connectivity (see their forum)
* never had DDoS issues

- monitoring of failures:

* internal tool to monitor hardware and software issues (e.g. wrongly deployed software, etc.)
Gordonjcp, about 2 years ago
I run traefik in docker, and then I run various other random shit, including my stepdaughter's Minecraft server, in docker.

Every couple of months I remember to pay the bill, then start browsing the auction page, then think "hey, that thing isn't much more than I'm paying now, maybe I should upgrade...", but mostly I just stick with things as they are.
creshal, about 2 years ago
It really depends a lot on what you get from Hetzner. Their cloud offerings are kinda weird (few features, high prices), so we buy dedicated servers and run our own containers on top of that.

Deploy from source: GitLab CI builds and deploys containers.

Keep software up to date: deploy new containers / migrate all containers off a host to upgrade it with OS tools (Debian for us, so just apt dist-upgrade).

Load balancing: nginx container.

Scaling: hasn't really been an issue for us yet, but terraform/k8s work fine from what I've heard.

Backups: a dedicated SX server pulls backups via rsnapshot, including DB dumps. All data is on minutely replicated ZFS pools, so we get short-term snapshots for free anyway.

Security: still on iptables and fail2ban for on-system stuff. DDoS protection from Hetzner itself is okay-ish, but for really critical sites Akamai or Cloudflare are still the safer choices. Both work fine.
RamblingCTO, about 2 years ago
We use Hetzner Cloud with terraform and a self-hosted Kubernetes cluster. Everything else is self-baked, obviously.
throwaway81523, about 2 years ago
Lots of fancy scripts around. I don't use any; I just configure new servers with an ansible playbook from my laptop, and generally do stuff by hand after that. I don't have anything I'd call "production", just personal and dev stuff. I have a cheap dedicated server that I use as a beataround and for long-running computations, and occasionally spin up Hetzner cloud instances for temporary usages. I don't automate backups. I have 5TB of backup space in a Hetzner Storage Box (10 euro/month for that!) and manually back up to it with Borg Backup and a few small shell commands in the .bashrc in my ansible script.
leephillips, about 2 years ago

    - deploy from source repo? Terraform?
      rsync
    - keep software up to date? ex: Postgres, OS
      apt-get
    - automate backups? ex: databases, storage.
      rsync, pg_dump
    - maintain security?
      systemd-nspawn
bckygldstn, about 2 years ago
- Applications and DBs run in docker containers. Deploying is basically git pull && docker-compose up --build -d

- Apt auto-upgrades; other software updates are handled in docker. The only software on the machine is haproxy, git and docker for deploys, plus newrelic and vector for monitoring.

- HAProxy runs on the server to route requests to docker containers. Cloudflare load balancing routes to servers.

- Scaling is avoided through over-provisioning cheap Hetzner machines. Adding new machines is done so rarely that a bash script is fine.

- DB backups are done in docker.

- ufw locks everything down to ports 22, 80, 443, and the DB ports. Because docker can interact with firewalls in surprising ways, I also replicate the rules in the Hetzner firewall.
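As a sketch, that ufw policy is only a few commands (the private subnet and DB port in the last rule are placeholders):

```
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
# DB port reachable only from the private network (placeholder subnet)
ufw allow from 10.0.0.0/16 to any port 5432 proto tcp
ufw enable
```

Note the caveat above: published Docker ports bypass ufw's INPUT rules by default, which is exactly why duplicating the rules in the Hetzner firewall is a sensible belt-and-braces move.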
jmstfv, about 2 years ago
Migrated from Linode to Hetzner. My workflow has stayed the same:

* Deploying using Git and Capistrano: `git push && cap production deploy` (aliased to cpd)

* Hetzner backups + daily backups to Tarsnap using cron

* Updating software by SSH-ing into the server and updating apt packages; I update Ruby gems locally

* For security: the built-in firewall + ufw, two-factor authentication, public-key-only authentication (the SSH key is protected with a password), SSH running on a non-standard port with a non-standard username

* I use sqlite as the database and Caddy as the web server
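That SSH hardening corresponds to a handful of sshd_config lines; the port and username here are placeholders, not the ones from the comment:

```
# Hypothetical /etc/ssh/sshd_config fragment
Port 2222                     # non-standard port (placeholder)
PasswordAuthentication no     # public-key only
PubkeyAuthentication yes
PermitRootLogin no
AllowUsers notroot            # non-standard username (placeholder)
```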
saccharose, about 2 years ago
I am not sure what level of abstraction and automation you are aiming for, but there is a pretty neat project for setting up a Kubernetes cluster on Hetzner, including automatic updates [1]. Even if it exceeds your requirements, you can scrape it to answer many of your questions.

[1] https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner
sergioisidoro, about 2 years ago
I've been using docker swarm + traefik + portainer and I'm quite happy. I orchestrate everything with Ansible [1]. The only manual process I have is provisioning the servers / load balancers.

It provides a super nice balance between going all-manual VPS and drinking the Kubernetes kool-aid.

[1] https://github.com/sergioisidoro/honey-swarm
KingOfCoders, about 2 years ago

    - deploy from source repo? GitHub, copy Go binary
    - keep software up to date? Using Hetzner Cloud + hosted Postgres
    - do load balancing? Hetzner LB + DNSMadeEasy LB failover
    - handle scaling? I don't need to scale fast
    - automate backups? Snapshots + hosted Postgres
    - maintain security? SSH on another port, Hetzner private networks, built-in firewall and DDoS protection
nurettin, about 2 years ago
I just ssh 'git pull && ./deploy.sh', which rolls back on deploy error.

Traffic: no DDoS protection, no load balancing.

Backups: daily automated backups provided by the host. No incrementals.

Updates: unattended upgrades; the software is tested not to break when databases and message queues restart due to unattended upgrades.

Security: intact SELinux, ufw, proper users and permissions.
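The roll-back-on-deploy-error part can be sketched as a small function that runs on the server. This is a hypothetical illustration, not the commenter's actual script; `deploy.sh` stands in for whatever the repo's deploy step is, and the rollback is simply a hard reset to the commit that was checked out before the pull.

```shell
#!/bin/sh
# Sketch: pull, deploy, and roll back to the previous commit if deploy fails.
deploy_with_rollback() {
  prev=$(git rev-parse HEAD) || return 1
  git pull --ff-only || return 1
  if ! ./deploy.sh; then
    echo "deploy failed, rolling back to $prev" >&2
    git reset --hard "$prev" >/dev/null
    return 1
  fi
}
```

A real setup would also restart the service after the reset, so a broken deploy leaves the old version running rather than a half-applied new one.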
styren, about 2 years ago
Sorry for the blatant self-promotion. If you're looking for managed Kubernetes, I'm building https://symbiosis.host, a service built on top of Hetzner, with support for terraform, load balancers, storage, etc.
sirodoht, about 2 years ago
Deployment from a bash script that sshes into the Hetzner VPS, git pulls the data and restarts the server.

OS kept up to date manually.

No load balancing necessary; it's one server.

No scaling necessary; it's a few thousand users.

Backups: cron with a script that copies everything to an s3-compatible off-site cloud every 6 hours.

Security: firewall yes, DDoS protection no.
bestest, about 2 years ago
I use their instance to run CapRover for all my apps. That's basically about it. I use Hetzner's backup service; it saved me once recently.

DDoS protection could be off-loaded to Cloudflare; I don't need it personally.

I don't need to scale yet. But I believe CapRover is somewhat scalable.

Security? As others said, SSH keys.
kjuulh, about 2 years ago
Caddy, plus simple docker compose runtimes with Watchtower for updates.

Hetzner is just a bunch of VMs; they are all connected over wireguard for ease of use. ufw at the edge for locking down ports.

No DDoS protection, but I can turn it on in Cloudflare, which I use for DNS.
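The wireguard mesh between the VMs boils down to one config file per host, roughly like this (addresses, port and keys are placeholders; each peer block repeats for every other VM):

```
# Hypothetical /etc/wireguard/wg0.conf on one node
[Interface]
Address = 10.8.0.1/24
PrivateKey = <this-node's-private-key>
ListenPort = 51820

[Peer]
PublicKey = <other-node's-public-key>
Endpoint = <other-node's-public-ip>:51820
AllowedIPs = 10.8.0.2/32
```

With ufw blocking everything else at the edge, services can then bind only to the 10.8.0.0/24 addresses and stay unreachable from the public internet.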
PaywallBuster, about 2 years ago
Ansible for server configuration/changes/deployments.

Rundeck to automate/schedule jobs/deployments/upgrades, or to scale deployments out to a fleet of servers.
johne20, about 2 years ago
Question for those who encrypt the disk on Hetzner with LUKS: how can you get it to auto-assign a private IP from DHCP on boot?
t312227, about 2 years ago
Hmmm, mainly:

* ansible for CM (first 4 points)

Btw, I don't do any deploys from source repos; either build packages and use your favorite distribution's package management, or use containers.

* some shell/awk/perl/python scripts for backups & security-related stuff :)
js4ever, about 2 years ago
It seems you could be interested in a fully managed service on top of Hetzner, handling security/firewall/monitoring/alerts/backups but also OS/software updates and CI/CD pipelines from your repos.

Please check: https://elest.io

Disclaimer: I'm the CTO & founder.
x86hacker1010, about 2 years ago
I've been working on a new fully automated setup with 1 click.

Right now I provision my nodes automatically with Terraform. I use cloud-init scripts during machine initialization and an ad-hoc remote provisioner for some firewall stuff and config updates afterwards.

That covers boot. For configuration management I'm working on getting my Saltstack setup complete and easy to use.

Saltstack can be used like chef/ansible, but it's much more intuitive to me and very flexible. It handles automating and managing package installation on my nodes, firewall rules, grouping nodes by config, etc.

What's also cool with Salt is that you can have it make changes based on a webhook (Salt Reactor), e.g. merging a commit into master.

My plan is to basically version-control everything in Salt, so things like VPN setup, software and alerts are all set up automatically. I would love to extend this to also manage a NAS with automated backups.

Tl;dr: I am migrating my flow from automated deployments from GitHub using Ansible to automated provisioning and deployments using Salt, Terraform and GitHub/Gitea.
Udo, about 2 years ago
Are we talking about their cloud services or dedicated servers? I (and a couple of clients) use their dedicated servers; the procedure is the same as with any bare-metal hosting. Here's the setup of my own servers (one at the Falkenstein data center and one at Helsinki). My use case is small apps, with a couple hundred concurrent users at most. If you need a more dramatic infrastructure that scales up automatically and auto-deploys software left and right, that's a whole different ball game.

- Proxmox as the base OS, stock install. Close every port except SSH, 80, 443 (alternatively you may want to go with Wireguard instead of SSH). There is an nginx instance running in front of the containers; it passes data along to them as per config. Otherwise, nothing is reachable from the outside.

- Servers are in Proxmox containers, mostly also nginx, some Node.js, some other, you know the drill. The containers are pretty low overhead, so you can implement basically any deployment strategy in that environment. They're also easy to back up and to replicate to other machines.

> keep software up to date? ex: Postgres, OS

I run a periodic "apt update && apt upgrade -y && apt autoremove -y" as a cron job on most containers. Some configurations tend to break occasionally, so I do those specific ones manually or with additional scripts. I have a repo of scripts and snippets that I use everywhere, just little hacks that accumulated over the years because they automate useful things.

> do load balancing? built-in load balancer?

That depends on where your loads are, and what the structural needs of your applications are. If this is about external web requests to a mostly read-heavy application, I highly suggest using a CDN such as Cloudflare rather than rolling your own. That being said, nginx makes load balancing pretty painless.

> automate backups? ex: databases, storage. Do you use provided backups and snapshots?

Their storage offering is pretty okay, but I would consider restoring a whole-system backup a last resort. Proxmox has built-in support for container snapshots/backups, which gives you more granular control. These snapshots are also easy to rsync periodically to another host. If the physical machine dies, you just start the container on another host from a recent backup. There are HA options for this on Proxmox if you link more than one host into a cluster (which is overkill for most setups).

> maintain security? built-in firewall and DDoS protection?

Close down your ports. No complicated firewall rules, either. Just block anything that isn't directed at one of your 3 necessary ports. For DDoS protection: don't roll your own, use a CDN. Also, install only things you can audit or that come from a reasonably safe source. For instance, I would highly discourage running npm installs/updates unsupervised. If you have a production app that *needs* to work and *needs* to be reasonably secure, don't automatically pull dependencies from free-for-all package managers; deploy them with reviewed or known-good versions hard-locked (or deploy them with dependencies already included).

As a final tip: Hetzner servers come with RAID setups (usually RAID1). Monitor the status of those drives! If one fails, tell them to replace it. They will usually do it within the hour on a running system.
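The nginx-in-front-of-containers part of a setup like this boils down to one reverse-proxy block per app, roughly as below. The hostname and the container's internal address are placeholders, and TLS directives are omitted for brevity:

```
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://10.0.3.10:3000;   # the app container (placeholder address)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```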
ancieque, about 2 years ago
We use Docker Swarm for our deployments, so I will answer the questions based on that.

We have built some tooling around setting up and maintaining the swarm using ansible [0]. We also added some Hetzner flavour to that [1], which allows us to automatically spin up completely new clusters in a really short amount of time.

deploy from source repo:

- We use Azure DevOps pipelines that automate deployments based on environment configs living in an encrypted state in Git repos. We use [2] and [3] to make it easier to organize the deployments, using `docker stack deploy` under the hood.

keep software up to date:

- We are currently looking into CVE scanners that export into Prometheus to give us an idea of what we should update.

load balancing:

- depending on the project, Hetzner LB or Cloudflare

handle scaling:

- manually, but I would love to build some autoscaler for swarm that interacts with our tooling [0] and [1]

automate backups:

- docker swarm cronjobs, either via jobs with a restart condition and a delay, or [4]

maintain security:

- The Hetzner LB is front-facing. Communication is done via encrypted networks inside Hetzner private cloud networks.

- [0] https://github.com/neuroforgede/swarmsible

- [1] https://github.com/neuroforgede/swarmsible-hetzner

- [2] https://github.com/neuroforgede/nothelm.py

- [3] https://github.com/neuroforgede/docker-stack-deploy

===================

EDIT, about storage:

We use cloud volumes.

For drivers, we use https://github.com/costela/docker-volume-hetzner, which is really stable.

CSI support for Swarm is in beta as well and already merged in the Hetzner CSI driver (https://github.com/hetznercloud/csi-driver/tree/main/deploy/docker-swarm). There are some rough edges at the moment with Docker + CSI, so I would stick with docker-volume-hetzner for now for prod usage.

Disclaimer: I contributed to both repos.
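The "jobs with a restart condition and a delay" trick mentioned above can be sketched as a stack file: the service runs once, exits, and Swarm restarts it after the delay, which gives a crude daily cronjob (the image name is a placeholder):

```
version: "3.8"
services:
  db-backup:
    image: registry.example.com/backup-job:latest   # placeholder
    deploy:
      replicas: 1
      restart_policy:
        condition: any   # restart even after a clean exit...
        delay: 24h       # ...but only after this long
```

The interval drifts by however long the job itself takes, which is why the comment also points at a dedicated swarm cronjob tool as the alternative.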
KronisLV, about 2 years ago
I use Hetzner, Contabo, Time4VPS and other platforms in pretty much the same way (as IaaS VPS providers on top of which I run software, as opposed to SaaS/PaaS), but here's a quick glance at how I do things, with mostly cloud-agnostic software.

> deploy from source repo? Terraform?

Personally, I use Gitea for my repos and Drone CI for CI/CD.

Gitea: https://gitea.io/en-us/

Drone CI: https://www.drone.io/

Some might prefer Woodpecker due to licensing: https://woodpecker-ci.org/ but honestly most solutions out there are okay, even Jenkins.

Then I have some sort of a container cluster on the servers, so I can easily deploy things: I still like Docker Swarm (projects like CapRover might be nice to look at as well), though many might enjoy the likes of K3s or K0s more (lightweight Kubernetes clusters).

Docker Swarm: https://docs.docker.com/engine/swarm/ (uses the Compose spec for manifests)

K3s: https://k3s.io/

K0s: https://k0sproject.io/ though MicroK8s and others are also okay.

I also like having something like Portainer as a GUI to manage the clusters: https://www.portainer.io/ For Kubernetes, Rancher might offer more features, but will have a higher footprint.

It even supports webhooks, so I can do a POST request at the end of a CI run and the cluster will automatically pull and launch the latest tagged version of my apps: https://docs.portainer.io/user/docker/services/webhooks

> keep software up to date? ex: Postgres, OS

I build my own base container images and rebuild them (with recent package versions) on a regular basis, which is automatically scheduled: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-for-all-of-my-containers

Drone CI makes this easy to have happen in the background, as long as I don't update across major versions, or Maven decides to release a new version and remove their old version .tar.gz archives from the downloads site for some reason, breaking my builds and making me update the URL: https://docs.drone.io/cron/

Some images, like databases etc., I just proxy through my Nexus instance; version upgrades are relatively painless most of the time, at least as long as I've set up the persistent data directories correctly.

> do load balancing? built-in load balancer?

This is a bit more tricky. I use Apache2 with mod_md to get Let's Encrypt certificates, and Docker Swarm networking for directing the incoming traffic across the services: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-httpd-in-2022

Some might prefer Caddy, which is another great web server with automatic HTTPS: https://caddyserver.com/ but the Apache modules do pretty much everything I need and the performance has never actually been too bad for my needs. Up until now, the applications themselves have always been the bottleneck; I'm actually working on a blog post comparing some web servers in real-world circumstances.

However, making things a bit more failure-resilient might involve just paying Hetzner (in this case) for a load balancer: https://www.hetzner.com/cloud/load-balancer which will make everything less painful once you need to scale.

Why? Because doing round-robin DNS with the ACME certificate directory accessible and synchronized across multiple servers is a nuisance, although servers like Caddy attempt to get this working: https://caddyserver.com/docs/automatic-https#storage You could also get DNS-01 challenges working, but that needs even more work and integration with setting up TXT records. Even if you have multiple servers for resiliency, not all clients will try all of the IP addresses if one of the servers is down, although browsers should: https://webmasters.stackexchange.com/a/12704

So if you care about HTTPS certificates and want to do it yourself with multiple servers having the same hostname, you'll either need to get DNS-01 working, do some messing around with shared directories (which may or may not actually work), or just get a regular commercial cert that you manually propagate to all of the web servers.

From there on out it should be a regular reverse-proxy setup; in my case Docker Swarm takes care of the service discovery (hostnames that I can access).

> handle scaling? Terraform?

None; I manually provision however many nodes I need, mostly because I'm too broke to hand over my wallet to automation.

They have an API that you or someone else could probably hook up: https://docs.hetzner.cloud/

> automate backups? ex: databases, storage. Do you use provided backups and snapshots?

I use bind mounts for all of my containers' persistent storage, so the data is accessible on the host directly.

Then I use something like BackupPC to connect to those servers (SSH/rsync) and pull the data to my own backup node, which then compresses and deduplicates it: https://backuppc.github.io/backuppc/

It was a pain to set up, but it works really well and has saved my hide dozens of times. Some might enjoy Bacula more: https://www.bacula.org/

> maintain security? built-in firewall and DDoS protection?

I personally use Apache2 with ModSecurity and the OWASP rule set, to act as a lightweight WAF: https://owasp.org/www-project-modsecurity-core-rule-set/

You might want to just cave in and go with Cloudflare for the most part, though: https://www.cloudflare.com/waf/