I feel the whole debate is overkill: 99% of businesses/systems will never have so much data that they NEED to use UUIDs. I personally don't like using integers for keys either, as I've been burnt by them before. I also doubt any software I build today, or have built in the last 10 years, will be used 100 years from now.
<p>Recently I built a new system (typical business-type backend) and was forced to use SQLite + C# + Dapper. With this combination I cannot use GUIDs/UUIDs, as Dapper cannot properly map them back to C# from SQLite, and my dislike of ints got me thinking. I have a random string generator (I've used it for years for things like OTPs and other reference numbers) that takes an alphabet plus the desired string length. Using 8 to 12 characters I can get a few million unique strings, which would be a few million per database table if used as a primary key. Then I heard, in the back of my head, the guys from work arguing that I'd run out of unique combinations or would have to do lookups to check for collisions. So I decided to slap the year and month on as a prefix, so a key might look like this: 2105HSUAMWPA. This indexes really well, and there is some inherent information visible in the key itself: year 21, month 5, then the unique bits. It's basically 4 lines of code that gets called on every new database entity. I think it will be easy to shard/partition the data too, if the need arises in the future, by simply looking at the first 4 digits.
<p>Thus, to summarize:
<p>Data is sliced by entity type (customer, invoice, etc.), then by date (2105 for May 2021), then by a unique string.
<p>What do you guys think about this approach? Anyone been burnt by something like this?