I don't think this is worth it unless you are setting up your own CDN or similar. In the article, they exchange 1 to 4 stat calls for:

- A more complicated nginx configuration. This is no light matter. You can see in the comments that even the author got bugs in their first try. For instance, introducing an HSTS header now means you have to remember to add it in all of those locations.

- Running a few regexes per request. This is probably still significantly cheaper than the stat calls, but I can't tell by how much (and the author hasn't checked either).

- Returning the default 404 page instead of the CMS's for any URL under the defined "static prefixes". This is actually the biggest change, both in user-visible behavior and in performance (particularly if a misbehaving crawler starts checking non-existent URLs in bulk or similar). The article doesn't even mention this.

The performance gains for regular accesses are purely speculative because the author didn't make any effort to quantify them. If somebody has quantified the gains, I'd love to hear about it though.
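To illustrate the header-duplication pitfall, here is a minimal sketch of the kind of prefix split the article describes (the prefix names, root, and header value are made up for illustration):

    # Static assets served directly, bypassing the stat-based fallback.
    location /assets/ {
        root /srv/www;
        add_header Strict-Transport-Security "max-age=31536000" always;
    }

    location /media/ {
        root /srv/www;
        # Easy to forget: once a block defines its own add_header,
        # server-level headers stop being inherited, so HSTS must
        # be repeated here too.
        add_header Strict-Transport-Security "max-age=31536000" always;
    }

That inheritance rule (add_header directives are inherited only if the current level defines none of its own) is exactly why every one of those locations needs the repetition.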
Didn't Apache2 also take a performance penalty from allowing configuration in .htaccess files, which must be read in a similar way? https://httpd.apache.org/docs/current/howto/htaccess.html#when (you can disable that and configure the web server much as you would with Nginx, with config file(s) in a specific directory)

The likes of try_files across a bunch of web servers are pretty convenient though, as long as the performance penalty doesn't become a big deal.

Plus, I've found it's nice to have api.myapp.com and myapp.com as separate bits of config, so that no ambiguity exists for anything that's reverse proxied, and as much of the static assets as possible (for example, for an SPA) stay separate from all of that. Of course it becomes a bit trickier for server-side rendering or the likes of Ruby on Rails, Laravel, Django etc. that try to have everything in a single deployment.
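Roughly what that hostname split looks like in nginx, as a hedged sketch (the hostnames, upstream port, and paths are assumptions for illustration):

    # api.myapp.com: everything reverse-proxied, no static-file ambiguity.
    server {
        server_name api.myapp.com;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }

    # myapp.com: static SPA assets only, no backend involved.
    server {
        server_name myapp.com;
        root /srv/www/spa;
        # Client-side routing: unknown paths fall back to index.html,
        # never to a dynamic script.
        location / {
            try_files $uri $uri/ /index.html;
        }
    }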
Sticking to this rule has served me well over the years:

- resources that are dynamically generated are served by API endpoints, and therefore from known locations with predictable parameters

- everything else must be static files

And definitely no dynamic script as the fallback rule: it's too wasteful in an era of crawlers that ignore robots.txt and automated vulnerability scanners.

A backend must be resilient.
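In nginx terms, that rule might look like this minimal sketch (the /api/ prefix, root, and upstream address are placeholder assumptions):

    server {
        server_name example.com;
        root /srv/www/static;

        # Known, predictable dynamic endpoints only.
        location /api/ {
            proxy_pass http://127.0.0.1:8080;
        }

        # Everything else is static; unknown URLs get a cheap 404
        # instead of waking up a dynamic script.
        location / {
            try_files $uri $uri/ =404;
        }
    }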
Speaking of NGINX directives that can make a big difference when serving files, here is how we use them to enforce access control:

https://community.qbix.com/t/restricting-access-to-resources/195
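For anyone unfamiliar, the usual building block for this is nginx's internal directive combined with an X-Accel-Redirect response header from the app. A generic sketch of that pattern, not necessarily the exact setup in the linked post (the paths and upstream are hypothetical):

    # Files here can only be reached via an internal redirect,
    # never by a direct client request.
    location /protected/ {
        internal;
        alias /srv/private-files/;
    }

    location /download/ {
        # The app checks the user's permissions, then responds with
        # "X-Accel-Redirect: /protected/<file>" and an empty body;
        # nginx then serves the file itself.
        proxy_pass http://127.0.0.1:8080;
    }

The nice part of this design is that the app only does the authorization check; the heavy lifting of actually streaming the file stays in nginx.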