I’ve got an old (but still not bad for dev needs) quad core with 16 GB RAM. That’s quite a lot for running various small containerized services. However, I’m wondering if this is considered a bad practice.

Cloud machines for something like this would be >$100 a month.

What security measures and other considerations would I need to keep in mind if I go this route?

For example, I have an Emailer service that sends updates to users (it needs to connect to my remote hosted DB).
Why would you rent a cloud instance that is as overprovisioned as your home computer? Compare prices based on what you'll actually use, not what you have available. You'd be surprised how much you can run on a few small VPSes or a container hosting service like fly.io.
I've seen enough small businesses with "servers" built from desktops and laptops (and have even been the one doing it, in cases where the one needing it can't justify budgeting for a proper server but does need "something" and is fully aware that I offer nothing even vaguely resembling a warranty) that I'd say it's doable. Yes, it's "bad", but it's a massive money saver and "bad" is better than "nonexistent".

The big risk will be around hardware reliability. A desktop or laptop just ain't built like a server is. Knowing this, redundancy is key. Hell, it's key *anyway*, but it's an even more acute need when you're using consumer hardware - especially since your average desktop/laptop won't have redundant power supplies and hot-swappable drive caddies and multiple NICs and all the other goodies that preserve server uptime. Hardware reliability is one of those things that PaaS providers largely abstract away for you, so keep that in mind, too.

My usual strategy for a "poor man's server farm" is to treat entire machines as disposable. Ain't like most laptops have the physical connectors for RAID anyway, and safeguarding data is what backups are for. If any component starts to fail, everything's migrated to a different cheap piece of shit and the old one gets thrown on the "fix it eventually" pile to be either repaired or e-wasted.
If you've got external users, I'd go cloud.

Hardware dies unpredictably. The cloud providers have figured out how to be highly resilient to it.

If you use your own hardware, there's a XX% chance you're going to have a really bad day sometime in the next year.
> However I’m wondering if this is considered a bad practice

It depends who you ask.

I do the same at home and it works like a charm, but I have no users besides myself and my SO.

If you have paying users and SLAs, you'd better get that $100/month cloud machine or have a disaster recovery plan ready (and tested).

> What security measures and other considerations would I need to keep in mind if I go this route?

The usual: disable SSH password authentication, configure the firewall, don't share /var/run/docker.sock with containers, don't disable SELinux, run periodic backups, and test recovery procedures periodically. Normal sysadmin stuff.
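To make the first two concrete, a minimal hardening sketch - assuming a Debian/Ubuntu box with OpenSSH and ufw; the ports are examples, adapt them to what you actually serve:

    # /etc/ssh/sshd_config -- key-only SSH logins
    PasswordAuthentication no
    PermitRootLogin no
    PubkeyAuthentication yes

    # firewall: deny everything inbound except what you serve
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow ssh
    sudo ufw allow 443/tcp
    sudo ufw enable

Restart sshd after editing the config, and only after confirming your key-based login works from a second session so you don't lock yourself out.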
You can self-host if you monitor for signs of hardware failure and keep replacement hardware plus a good backup strategy ready, especially if you rely on it for hosting services for customers/other people.

If it's just for you and you have the hour or so per week to dedicate to updates and maintenance, then I don't see why not. I run two servers in my room and they've worked fine with minimal maintenance so far, except for a few old hard drives that needed replacing and a fan that I couldn't get replaced because of a screw I stripped years ago.

There are quite decent cloud machines out there if you just look beyond the big guys like AWS and GCloud. https://contabo.com/en/vps/ has decent servers for cheap, as does https://www.hetzner.com/cloud

If your emailer service makes direct contact with destination SMTP servers, then running from home is probably not an option: residential IPs are widely blocklisted and many ISPs block outbound port 25. If you use an external SMTP relay to deliver mail to the destination servers, this won't be a problem.
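For the relay approach, a minimal sketch using Python's standard library - the host, port, credentials, and addresses are placeholders for whatever relay provider you use:

    import smtplib
    from email.message import EmailMessage

    # Placeholder relay settings -- substitute your provider's values.
    RELAY_HOST = "smtp.example.com"
    RELAY_PORT = 587
    RELAY_USER = "relay-user"
    RELAY_PASS = "relay-password"

    def send_update(to_addr: str, subject: str, body: str) -> None:
        msg = EmailMessage()
        msg["From"] = "updates@example.com"
        msg["To"] = to_addr
        msg["Subject"] = subject
        msg.set_content(body)

        # Submit via the relay (port 587 + STARTTLS) instead of talking to
        # destination servers on port 25, so deliverability doesn't depend
        # on your home IP's reputation.
        with smtplib.SMTP(RELAY_HOST, RELAY_PORT) as smtp:
            smtp.starttls()
            smtp.login(RELAY_USER, RELAY_PASS)
            smtp.send_message(msg)

    if __name__ == "__main__":
        send_update("user@example.org", "Weekly update", "Hello from the home server.")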
I agree with the other commenters that overprovisioning (or underprovisioning) is a concern with the cloud, but the public cloud has long been more secure than on-prem data centers [1], [2], [3].

As for the cost, Reserved Instances can dramatically reduce your spend, with the caveat that you can get locked in for 1 or 3 years. My company, Usage.AI, built a platform to solve this problem by automatically buying and selling Reserved Instances to get the price and flexibility benefits in one [4].

[1] https://www.infoworld.com/article/3010006/sorry-it-the-public-cloud-is-more-secure-than-your-data-center.html

[2] https://blogs.oracle.com/cloudsecurity/post/7-reasons-why-the-cloud-is-more-secure

[3] https://cloud.google.com/blog/products/identity-security/enterprises-trust-cloud-security

[4] http://usage.ai
You need to get some metrics for each of your containers. Rather than choosing a cloud machine that is similar to your current computer, consider micro-instances to host each of your services. That way, you can scale up only the instances that need additional capacity.

At $100+/month, I would like to think that you're generating sufficient revenue to cover those costs plus a return on your time and effort. $100/month over a year pays for a rather nice notebook.
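One rough way to gather those per-container numbers, assuming Docker and the docker Python SDK (pip install docker); a one-shot snapshot, not proper monitoring:

    import docker  # pip install docker

    client = docker.from_env()

    # Snapshot of memory use per running container, as a rough guide to
    # how small an instance each service could fit on.
    for container in client.containers.list():
        stats = container.stats(stream=False)
        mem_mib = stats["memory_stats"]["usage"] / (1024 * 1024)
        print(f"{container.name}: {mem_mib:.1f} MiB")

Run it a few times under realistic load before sizing instances; idle numbers will undersell what each service actually needs.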
I'm sure I'm missing something, but I feel like the main reasons to choose cloud over self-hosting are availability and bandwidth. The data center is probably more reliable for uptime than your home. If neither of those is a problem for you, then you should consider segregating the network that machine runs on from the rest of the devices in your home, so your server can't connect to your TV in the living room, for example.
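A sketch of that segregation, assuming a Linux box acting as the router between a server subnet (192.168.20.0/24) and the main LAN (192.168.1.0/24) - the subnets are placeholders, and most consumer routers expose the same idea through VLAN or guest-network settings instead:

    # On the router: let the server subnet reach the internet,
    # but drop anything it sends toward the home LAN.
    iptables -A FORWARD -s 192.168.20.0/24 -d 192.168.1.0/24 -j DROP
    iptables -A FORWARD -s 192.168.20.0/24 -j ACCEPT

Rule order matters here: the DROP has to come before the broader ACCEPT.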
IMHO, the key point here is to *automate* the setup/deployment/maintenance of your system and application, so you have a certain degree of independence from the systems running the stage in question.

For development: use whatever you see fit. Who cares if the system is down? You've got automation to set it up elsewhere if, for example, the hardware dies.

For production: this is "the other side" of the story; here you are aiming for availability etc. Use the automation you already "showcased" as a PoC on your development systems to quickly recover from major faults.

Re: the DB, use a local test database on your development system. You don't want to access prod for development tests! Make a dump of prod every now and then (if the prod data is not sensitive), or generate sufficient test data, etc.

br,
v
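A minimal sketch of that automation idea, assuming Docker Compose; the image names are placeholders, and the point is simply that the whole stage can be recreated on any box from one file:

    # docker-compose.yml -- the whole stack, reproducible on any machine
    services:
      emailer:
        image: registry.example.com/emailer:latest   # placeholder image
        restart: unless-stopped
        env_file: .env        # DB credentials etc. stay out of the file
      testdb:
        image: postgres:16
        restart: unless-stopped
        environment:
          POSTGRES_PASSWORD: devonly   # local test DB, never prod data
        volumes:
          - testdb-data:/var/lib/postgresql/data
    volumes:
      testdb-data:

Recreating the stage after a hardware death is then one `docker compose up -d`, and seeding testdb from a sanitized prod dump is a pg_dump piped into psql.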
Good hardware (reliable SSDs and HDDs) and a backup solution are the first step. Next you will need a proper internet connection; for example, your ISP might block ports or put you behind CGNAT. I created Hoppy Network to assist with this last step: it provides you a clean IPv4 and IPv6 address over WireGuard. Some networking background is recommended if you want to route multiple services.

https://hoppy.network
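For the general pattern (not specific to any provider), a WireGuard client config on the home box looks roughly like this sketch; the keys, addresses, and endpoint are all placeholders:

    # /etc/wireguard/wg0.conf -- tunnel to a provider that owns the clean IPs
    [Interface]
    PrivateKey = <your-private-key>       # placeholder
    Address = 203.0.113.10/32, 2001:db8::10/128

    [Peer]
    PublicKey = <provider-public-key>     # placeholder
    Endpoint = vpn.example.net:51820      # placeholder
    AllowedIPs = 0.0.0.0/0, ::/0          # route all traffic through the tunnel
    PersistentKeepalive = 25              # keep NAT mappings alive

Bring it up with `wg-quick up wg0`; inbound traffic to the clean addresses then reaches your home machine regardless of what your ISP does with your line.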
What’s the best way to get in touch with you? I’m building cost-effective dev cloud VMs that might fit your use case. It has granular usage-based metering, and you’ll be charged almost nothing when the VM is idle.
Yes.

NextCloud is your data on your hardware.

Encrypt it, back it up to the cloud (someone else's computer) as offsite storage, but don't give that cloud read/write access to your calendar, contacts, and emails.
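One way to do the encrypted offsite part, assuming restic with an S3-compatible bucket (the endpoint, bucket name, and data path are placeholders); restic encrypts client-side, so the storage provider only ever sees ciphertext:

    # Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY,
    # the repo password from RESTIC_PASSWORD (or an interactive prompt).

    # One-time setup: create the encrypted repository
    restic -r s3:s3.example.com/nextcloud-backups init

    # Nightly: push the NextCloud data directory offsite
    restic -r s3:s3.example.com/nextcloud-backups backup /var/nextcloud/data

    # Periodically: verify the repository is intact
    restic -r s3:s3.example.com/nextcloud-backups check

And per the earlier comments: actually test a restore now and then, since an unverified backup is a hope, not a backup.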