For clarity, there is a significant warm-up time for the DB to load your data when you cold-start, so this is sadly not suitable for use with serverless Lambda functions in a web request cycle.

A good use case for this would be batch processing on demand, e.g. a background job that's infrequently run when records are uploaded somehow. You save the cost of keeping a big DB running between jobs, and the start-up time doesn't matter.
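A minimal sketch of that kind of batch job, assuming psycopg2 and placeholder connection details and table names; the retry loop is there to absorb the resume-from-pause delay instead of failing fast:

    import time
    import psycopg2

    # Hypothetical DSN; host, credentials, and dbname are placeholders.
    DSN = "host=my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com dbname=jobs user=batch password=secret"

    def connect_with_retry(dsn, attempts=10, delay=15):
        # The paused cluster can take a while to resume, so keep retrying the connection.
        for i in range(attempts):
            try:
                return psycopg2.connect(dsn, connect_timeout=10)
            except psycopg2.OperationalError:
                if i == attempts - 1:
                    raise
                time.sleep(delay)

    def run_batch():
        conn = connect_with_retry(DSN)
        with conn, conn.cursor() as cur:
            # Hypothetical table of uploaded records waiting to be processed.
            cur.execute("SELECT count(*) FROM uploaded_records WHERE processed = false")
            print("pending:", cur.fetchone()[0])
        conn.close()

    if __name__ == "__main__":
        run_batch()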
AWS RDS for PostgreSQL supports many popular extensions [1]. I wonder if these are activated here or will be in time. One of pg's unfair advantages is its ecosystem.

I have user-triggered batch processing jobs that need PostGIS; Aurora Serverless for pg would be a great fit.

[1] https://docs.aws.amazon.com/fr_fr/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.Extensions
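A quick way to see whether PostGIS is actually offered on a given endpoint is to ask the server itself; this is just a sketch with placeholder connection details, not a claim about what Aurora Serverless enables:

    import psycopg2

    conn = psycopg2.connect(host="my-cluster.example", dbname="gis", user="app", password="secret")  # placeholders
    with conn, conn.cursor() as cur:
        # List the PostGIS-related extensions the server makes available.
        cur.execute("SELECT name, default_version FROM pg_available_extensions WHERE name LIKE 'postgis%'")
        for name, version in cur.fetchall():
            print(name, version)
        # If it shows up, it can be enabled per database:
        # cur.execute("CREATE EXTENSION IF NOT EXISTS postgis")
    conn.close()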
Although I know AWS is not optimizing for this use case, I see a lot of potential with serverless databases for things like open source, homebrew-style applications where people want to own and control their data but don't want to pay for a monthly service.

IoT devices could communicate asynchronously and store important data using just Lambda functions and serverless DB instances.

The first thought that came to mind was something like the Bitwarden open source server. Currently, you can run a nano instance in the cloud that stays on forever, but it needs to be maintained and it might go down occasionally. Additionally, it will cost you a few bucks a month, assuming you need more than the single nano allowed by the free tier.

With serverless functions and a serverless database you could have a central store without needing to run a full server. Cold startup wouldn't be an issue since you only need to sync to the server occasionally.
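Roughly what that Lambda-plus-serverless-DB idea could look like; the table name, environment variables, and event shape are all assumptions, and psycopg2 would need to be bundled with the function:

    import os
    import psycopg2

    def handler(event, context):
        # Connect on each (infrequent) invocation; credentials come from env vars.
        conn = psycopg2.connect(
            host=os.environ["DB_HOST"],
            dbname=os.environ["DB_NAME"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            connect_timeout=10,
        )
        with conn, conn.cursor() as cur:
            # Append the device reading carried in the event to a hypothetical table.
            cur.execute(
                "INSERT INTO readings (device_id, recorded_at, payload) VALUES (%s, %s, %s)",
                (event["device_id"], event["recorded_at"], event["payload"]),
            )
        conn.close()
        return {"status": "ok"}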
This was on HN just now: https://news.ycombinator.com/item?id=20398353
Aurora with Postgres is amazing. I don't know if I want to go back to Redshift after using it. Pair it with a nice instance of pgAdmin 4 and enjoy.
I've cut costs by an order of magnitude by changing back to old-school dedicated servers (using the same data centers that AWS/Google use: Equinix).
IOPS on AWS are ridiculously expensive.
How well would this work with something like Airflow? If I have jobs that run only at end of day for an hour on Airflow, do I pay for just an hour of the database?
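A sketch of that setup, assuming Airflow 2.x imports and a hypothetical end-of-day rollup; the idea is that the DAG only touches the database once a day, so a cluster with auto-pause enabled should spend most of the day paused, and the task retries absorb the resume delay:

    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.operators.python import PythonOperator
    import psycopg2

    def end_of_day_rollup():
        # Placeholder connection string and query.
        conn = psycopg2.connect("host=my-cluster.example dbname=reports user=etl password=secret")
        with conn, conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM events WHERE created_at::date = current_date")
            print("events today:", cur.fetchone()[0])
        conn.close()

    with DAG(
        dag_id="eod_rollup",
        start_date=datetime(2019, 7, 1),
        schedule_interval="0 23 * * *",  # once a day, end of day
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="rollup",
            python_callable=end_of_day_rollup,
            retries=3,
            retry_delay=timedelta(minutes=2),  # gives the paused cluster time to resume
        )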