Somewhat unrelated, but there are some WordPress themes that try to keep the bytes sent to a minimum. One such example is <a href="https://sustywp.com/" rel="nofollow">https://sustywp.com/</a>, and there's an excellent article by its author: <a href="https://blog.jacklenox.com/2018/06/04/delivering-wordpress-in-7kb/" rel="nofollow">https://blog.jacklenox.com/2018/06/04/delivering-wordpress-i...</a>
I just recently started hosting my personal website in my closet on a laptop: <a href="https://notryan.com" rel="nofollow">https://notryan.com</a>.<p>I also host a Discourse forum. It's pretty snappy for being in my closet (<a href="https://forum.webfpga.com" rel="nofollow">https://forum.webfpga.com</a>). Beats paying DigitalOcean $15/month ($180/year!) for a machine with 2 vCPU + 2 GB RAM (the minimum for Discourse).<p>I think more people should consider self-hosting, especially if the site isn't critical. It makes the Internet more diverse, for sure.
Hi OP, you seem to be the author of the blog (according to your post history).
Your contact email doesn't seem to be working, so I'll write it here:
Just a big thank you for your blog! It's such a pleasure to read it, at least once a week! THIS is the internet I like!
Reading through the webpage, I was thinking that points 1, 2 and 4 aren't really relevant. We're talking about bandwidth limitations as the key factor here - not the rest.<p>1. Page loading issues are irrelevant, unless they're caused by large items served over limited bandwidth.
2. Static vs. dynamic webpages are irrelevant if the pages themselves are small. Dynamic pages of course incur some CPU cost on the server side, but that is a machine issue, not a bandwidth issue.
3. Limiting the amount of data is obviously important.
4. The number of requests to the server only matters for request size, CPU, and TCP overhead (the last of which can be alleviated via multiplexing).
5. Yes, do compress the pages.
6. Agreed, website development kits often make the pages much larger.
7. Certainly, but this shouldn't be necessary if you have good cache headers.<p>One thing that was not mentioned was ensuring that static items are cacheable by the browser. This has a huge impact.
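To make the cache-header point concrete, here's a minimal sketch on top of Python's standard http.server (the file extensions and max-age values are my own picks, and it assumes versioned asset filenames like style.abc123.css so a year-long cache can never go stale):
<pre><code>import http.server

class CachingHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        if self.path.endswith((".css", ".js", ".png", ".woff2")):
            # Static assets: browsers may keep these for a year without re-asking.
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        else:
            # HTML: revalidate on every visit so updates show up promptly.
            self.send_header("Cache-Control", "no-cache")
        super().end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), CachingHandler).serve_forever()
</code></pre>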
This is a tangent, but I want to add to the less-is-more vibe!<p>I run some surprisingly-spiky-and-high-traffic blogs for independent journalists and authors, that kind of thing. Lots of media files.<p>There are two ways this typically goes: either you use some platform and try to get a custom domain name to badge it with, or else you imagine some complicated content-management system with app servers, databases, Elasticsearch clusters, etc.<p>At the time WordPress etc. weren't attractive. I have no idea what that landscape is like now, or even what was so unattractive about WP then, but anyway...<p>So we're doing it with an old Python Tornado webserver on a one-core, 256MB RAM VM at a small hosting provider. (I think we started with 128MB RAM, but that offering got discontinued years ago. It might now be 512MB, I'd have to check. Whatever it is, it's the smallest VM we can buy.)
The webserver is restarted, if it has crashed, by a once-a-minute cron job using flock -n (a sketch of the crontab entry is below).<p>The key part of the equation is that the hosting provider I use did away with monthly quotas. Instead, they just throttle bandwidth. So when the HN crowd or some other crowd descends, the pages just take longer to load. There is never the risk of a nastygram asking for more money, a threat to turn things off, or an error message saying some backend DB is unavailable.<p>Total cost? Under $20/month. I think the domains cost more than the hosting.<p>The last time I even checked up on this little VM? More than a year ago, I think. Perhaps two? Hmm, maybe I should search for the ssh details...<p>My personal blog is static and is on gh-pages. A fine enough choice for techies.
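In case anyone wants to copy the restart trick, it usually boils down to a single crontab entry like this one (paths here are hypothetical). flock -n fails immediately if the lock is already held, so at most one instance of the server ever runs:
<pre><code># Runs every minute; exits instantly if the server is already up.
* * * * * flock -n /tmp/blog.lock -c '/usr/bin/python3 /srv/blog/server.py' >>/var/log/blog.log 2>&1
</code></pre>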
An alternative:<p>Get a cheap OpenVZ box from LowEndBox, which will cost you between $3 and $15 a year.<p><a href="https://lowendbox.com/tag/yearly/" rel="nofollow">https://lowendbox.com/tag/yearly/</a>
Shameless plug for my 8KB website (if you don't include the 30x-larger PDF of my resume).<p>S3 hosting + Cloudflare SSL free tiers<p><a href="https://saej.in/" rel="nofollow">https://saej.in/</a>
My personal web site spent years on slow cable/DSL, and I used Coral CDN at the time to deal with peak traffic. Today I have 1Gbps/200Mbps fiber and keep it running off a tiny free VPS with Cloudflare, and sometimes wonder if I need the VPS at all.<p>(One of the reasons I moved it off my LAN was security - not providing an ingress point, etc.)
> your website may occasionally miss potential traffic during "high-traffic" periods.<p>The thing with personal home websites is that there's really no actual problem if the site gets overloaded, goes down for a day or a week, or is only up intermittently.<p>Constant availability and massive scaling aren't universal requirements. It's okay if a wave of massive attention is more than your upstream can support. If people are interested, they'll come back. If they don't, that's fine too.
Another way would be to upload your site to your local IPFS node. Your files will be automatically cached by other IPFS nodes as people discover your site, providing free load balancing and redundancy.<p>Your site will still be viewable even after you turn your local node off, until all traffic goes to zero and the caches eventually expire.<p><a href="https://docs.ipfs.io/guides/examples/websites/" rel="nofollow">https://docs.ipfs.io/guides/examples/websites/</a>
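For reference, publishing roughly follows the linked guide; the directory name is a placeholder, and $CID stands for the root hash the first command prints:
<pre><code># Add the site directory to your local node; the last line printed is the root CID.
ipfs add -r ./public
# Optionally bind your node's IPNS name to that CID so the address survives updates.
ipfs name publish /ipfs/$CID
</code></pre>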
After making a rare popular post (32K hits in a few hours) I moved my static site (a gohugo.io effort) to AWS Lightsail. For less than a fistful of dollars per month it’s someone else’s problem. I keep page size down (it’s text heavy) and so far I haven’t needed any CDN.<p>If I’m having trouble viewing a page on someone’s hammered server I either look in Google’s cache or use links/elinks to grab the text (usually what I’m interested in).
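(If you haven't tried the text-grab trick, it's a one-liner; the URL is a placeholder:)
<pre><code># Render the page and dump it as plain text instead of browsing interactively.
elinks -dump https://example.com/article > article.txt
</code></pre>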
I do wonder how all this compares in overall costs and performance to just serving the static website from S3 or B2?<p><a href="https://www.grahn.io/posts/2020-02-08-s3-vs-b2-static-web-hosting/" rel="nofollow">https://www.grahn.io/posts/2020-02-08-s3-vs-b2-static-web-ho...</a>
I dither all my images on the front page: <a href="http://sprout.rupy.se" rel="nofollow">http://sprout.rupy.se</a><p>The platform is open-source: <a href="https://github.com/tinspin/sprout" rel="nofollow">https://github.com/tinspin/sprout</a>
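For anyone curious how dithering saves bytes, here's a rough sketch with Pillow (sprout may well do it differently; the filenames and 16-colour palette are my own choices). Quantizing to a small palette with Floyd-Steinberg dithering usually yields a paletted PNG far smaller than the source image:
<pre><code>from PIL import Image

# Quantize to a 16-colour adaptive palette with Floyd-Steinberg dithering.
img = Image.open("photo.jpg").convert("RGB")
dithered = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=16,
                       dither=Image.Dither.FLOYDSTEINBERG)
dithered.save("photo-dithered.png", optimize=True)
</code></pre>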
To go along with the self-hosting crowd: the whole decentralized web, with its alternatives to HTTP, solves issues like uptime and low upstream bandwidth by distributing distribution itself, the way BitTorrent does. Dat (via the Beaker browser) and ZeroNet come to mind.<p>Counterpoint: content must be static
My ISP (Cox) doesn't allow inbound traffic on port 80. Anyone know any tricks for getting around that? I'm currently using a reverse proxy on a friend's server to tunnel through an alternative port, but I'm looking for a better solution.
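For comparison, the tunnel described above is often just an SSH reverse forward like the sketch below (names and ports are placeholders); the friend's server still needs something like nginx in front to serve it publicly on 80/443:
<pre><code># Expose the home box's port 80 as port 8080 on the friend's server.
ssh -N -R 8080:localhost:80 user@friends-server.example.com
</code></pre>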
Doesn’t putting CF in front of your home web server mostly solve this problem? CF will cache static assets for you and take away all the pain of a traffic surge.<p>It also prevents leaking your home IP address.
> The SSL handshake alone can take as long as a third of a second.<p>Is this true? If so, then it's a good reason for me not to enable SSL/TLS on my sites that don't need it (e.g. read-only documents or blog posts).
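It depends heavily on round-trip latency and TLS version (the handshake costs one or two extra round trips). One way to check for a particular host is to time it yourself; a rough sketch, with the hostname as a placeholder:
<pre><code>import socket, ssl, time

# Compare a bare TCP connect with TCP + TLS handshake to the same host.
host = "example.com"
start = time.monotonic()
with socket.create_connection((host, 443), timeout=5) as raw:
    tcp_ms = (time.monotonic() - start) * 1000
    with ssl.create_default_context().wrap_socket(raw, server_hostname=host):
        total_ms = (time.monotonic() - start) * 1000
print(f"TCP connect: {tcp_ms:.0f} ms; with TLS handshake: {total_ms:.0f} ms")
</code></pre>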
The article features some very good advice that should be followed by many other sites around the web, regardless of how you actually host the site...<p>It basically boils down to the old KISS principle: "keep it simple, stupid".
For the kind of site described here, there are plenty of CDNs that will treat it as a rounding error in terms of their running costs, if it costs them anything at all, and will serve it for you for free.
Low-tech Magazine [1] is an amazing example of serving a website sustainably: their infrastructure is solar-powered only, so the website is, in theory, only available if there's been enough sun to power the servers that day (in Barcelona, at least, we have sun much of the year).<p>[1] <a href="https://www.lowtechmagazine.com" rel="nofollow">https://www.lowtechmagazine.com</a>
I ran garyshood.com off of a Pentium 3 Dell PowerEdge and a Comcast connection at its peak (2009-2012). I couldn't offer an upload speed of more than 20-40 KB/s without slowing down the house, but with traffic shaping rules, and by following most of what he's outlined here, it was easy. It wasn't even CPU-bound, just network-bound, and I could still handle 100-300K unique visits a month. I did have to host the downloadable .exe for the autoclicker on a third party for $3 a month; even at 120 KB, that thing would use too much house bandwidth.
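Traffic shaping like that can be done on Linux with tc; a minimal sketch (interface name and rate are made up) that caps the box's total egress so the rest of the house stays usable:
<pre><code># Cap all outbound traffic on eth0 to ~256 kbit/s with a token-bucket filter.
tc qdisc add dev eth0 root tbf rate 256kbit burst 10kb latency 50ms
</code></pre>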