The single most used is PostgreSQL, with Redis as a cache store, for most cases.<p>But the more nuanced answer is: it depends on the project and the goals. I have used MSSQL, MySQL, PostgreSQL, Mongo, and Cassandra as primary data stores. I have used Elasticsearch, Redis, Couchbase, and memcached for specific caching or search purposes, but never as a primary data store, as they aren't designed for that IMO.<p>Today I rely on PostgreSQL as a default, with other databases used for specific purposes. I follow a different convention than one large database for everything, even in a monolith application: I like to use small, independent databases designed for the specific workload at hand, e.g., user auth has different needs than, say, logging or other high-write, low-read datasets. I also like this because it forces defined lines in the application for where data resides and how it is meant to be used.<p>The thing I have learned to do better over the past 10 years especially is to use the right data store for the right job, versus one size fits all. At the same time, I still try to keep the number of different types of data stores to the sheer minimum required. So sometimes that means multiple instances of, say, PostgreSQL, each set up differently based on workload and access needs, or maybe PostgreSQL and Mongo for the strengths of each. Redis is almost always my caching engine now for any of these data stores.<p>In the '80s and early '90s nearly everything was file based in small companies (e.g., we wrote our own data stores); SQL was out of reach for a lot of smaller companies, with license costs plus a DBA, etc. We then moved to ISAM and then to SQL in the mid-'90s. In the mid to late '90s we got distributed object stores (the first real NoSQL databases IMO), like Versant (still around), which we used too.
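<p>To make the per-workload split concrete, here is a minimal sketch of what "multiple instances of PostgreSQL, each set up differently" can look like in application config. All names, DSNs, and settings are hypothetical illustrations, not from any real project:

```python
# Hypothetical per-workload database config: each concern gets its own
# small database, tuned for its access pattern.
WORKLOAD_DATABASES = {
    # User auth: strict durability, few writes, many point reads.
    "auth": {
        "dsn": "postgresql://auth-db:5432/auth",
        "synchronous_commit": "on",   # never lose a credential change
        "pool_size": 5,
    },
    # Logging/telemetry: high write, low read; durability relaxed for speed.
    "logs": {
        "dsn": "postgresql://logs-db:5432/logs",
        "synchronous_commit": "off",  # acceptable to lose the last few rows
        "pool_size": 20,
    },
}

def dsn_for(workload: str) -> str:
    """Look up the connection string for a given workload."""
    return WORKLOAD_DATABASES[workload]["dsn"]
```

The point of the split is that the auth and logging stores can diverge in durability, pooling, and even engine later, without the application code blurring where each kind of data lives.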
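<p>Using Redis as the cache in front of a primary store usually means cache-aside: check the cache, fall back to the database on a miss, then populate the cache. A minimal sketch, with a plain dict standing in for Redis so it is self-contained (with real Redis you would swap in redis-py's GET/SETEX so entries also expire); the function and key names are illustrative:

```python
cache = {}  # stand-in for Redis in this sketch

def fetch_user(user_id, load_from_db):
    """Cache-aside read: try the cache first, then the primary store."""
    key = f"user:{user_id}"
    if key in cache:                    # cache hit: skip the database
        return cache[key]
    value = load_from_db(user_id)       # cache miss: hit the primary store
    cache[key] = value                  # populate for subsequent reads
    return value
```

The trade-off is the usual one: reads after the first are cheap, but writes to the primary store must also invalidate or refresh the cached key, or readers see stale data until the TTL expires.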