> Ship first, worry about scale later<p>This is repeated constantly, but I fear it gets internalized as “write shitty code and throw money at it later.” If you have taken the time to learn your language well, you can avoid a lot of <i>really</i> bad decisions without spending any additional time.<p>Similarly, on the infra side of things (where this advice is usually doled out), maybe take the time to have a modicum of understanding about the tools you’re building on. If you’re using a DBaaS, your vendor almost certainly has monitoring built in, often for free or at a nominal cost. USE IT, and learn what it is you’re looking at. “The DB is slow” could be anything from excessive row locks due to improperly held transactions to actually hitting an underlying resource limit – and for the latter, nine times out of ten it’s a symptom of misconfiguration, or of not understanding how your RDBMS operates.<p>For example, do you have a write-heavy table with a UUIDv4 PK, lots of heavily indexed columns, and some medium-large JSON blobs in it? Congratulations, you’ve created Postgres’ (and MySQL’s, for different reasons) worst nightmare. Every write is amplified by the indexes, and even if your UPDATE only touches one of the indexed columns, <i>all</i> of them get new entries, because Postgres writes a whole new row version and (unless the update qualifies as HOT) every index has to point to it. The UUIDv4 PK means your WAL traffic is going to skyrocket from all the full-page writes, since random keys dirty pages all over the index, and if your JSON blobs are big enough to be unwieldy but not big enough to be TOASTed, that’s another huge write amplification. All of this can easily result in hitting IOPS limits, network bandwidth limits, or CPU saturation from additional queries piling up while this one is dealt with, and all of it could have been avoided with a basic understanding of your tooling.
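<p>For instance, before concluding the hardware is too small, it’s worth glancing at what’s actually running. A rough Postgres-flavored check (columns are from pg_stat_activity; adjust for your setup):

    -- Sessions with long-open transactions or lock waits: a common cause of
    -- "the DB is slow" that no instance resize will fix.
    SELECT pid,
           state,
           wait_event_type,
           wait_event,
           now() - xact_start AS xact_age,
           left(query, 80)    AS query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY xact_age DESC NULLS LAST;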
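<p>To make the last paragraph concrete, here’s a hypothetical version of that table (names invented, thresholds approximate – by default TOAST only kicks in around the ~2 KB mark):

    -- Hypothetical write-amplification magnet: random UUID PK, several
    -- indexed columns, and a jsonb payload that stays inline (below the
    -- ~2 KB TOAST threshold) so every new row version carries it.
    CREATE TABLE events (
        id         uuid        PRIMARY KEY DEFAULT gen_random_uuid(),
        account_id uuid        NOT NULL,
        kind       text        NOT NULL,
        status     text        NOT NULL,
        created_at timestamptz NOT NULL DEFAULT now(),
        payload    jsonb       NOT NULL  -- ~1-2 KB, rewritten with every row version
    );
    CREATE INDEX ON events (account_id);
    CREATE INDEX ON events (kind);
    CREATE INDEX ON events (status);
    CREATE INDEX ON events (created_at);

    -- A sequential key (bigint identity, or UUIDv7 if you need UUIDs) keeps
    -- inserts clustered in the PK btree, which cuts the full-page-write churn:
    -- CREATE TABLE events (
    --     id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    --     ...
    -- );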
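<p>And the built-in stats will tell you whether your UPDATEs are fanning out to every index – if the HOT percentage on a write-heavy table is low, each update is touching all of its indexes:

    -- Share of updates that were HOT (heap-only tuple) and therefore skipped
    -- the indexes entirely. Low numbers on busy tables mean heavy index churn.
    SELECT relname,
           n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC
    LIMIT 20;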