I've not found the time to write up the entirety of my experience, unfortunately, but I did move a bunch of stuff off Heroku over the past couple of years and directly onto AWS. It was a very piecemeal approach, which had the double benefit of being low/no impact to end users while also letting me do it at my leisure. My general approach was:

* Import my current Heroku config into Terraform resources so I can co-ordinate changes across multiple platforms as a single atomic change (there's a sketch of this at the end of this comment).
* Embrace a strangler pattern (https://www.redhat.com/architect/pros-and-cons-strangler-architecture-pattern). I used CloudFront, but you could put any CDN in front (see the CloudFront sketch below).

* My databases + workers were a large part of my Heroku bill, and I had a very spiky usage profile (potentially days with near-zero usage, then brief peaks), so I used the migration as an opportunity to refactor towards serverless infrastructure. That was entirely superfluous to the migration, though. If I'd not taken that approach, the alternative would have been to: provision an RDS Postgres instance; add the required IAM profiles to my Heroku app; work out how/when to schedule a window for cutting over to RDS as the primary DB; and update DATABASE_URL accordingly. Again, do all of this via Terraform, but in small incremental steps where possible (i.e., adding the IAM profiles to the app first). Once cut over, take a final snapshot of the Heroku Postgres database and then shut it down (also sketched below).

* Update the code on my workers to be idempotent.

* Make sure config vars are imported into Terraform and are synced to the various places they need to be (probably just the Heroku app for now).

* Have the workers run inside containers on AWS (doing just one worker at a time), exposing the required config vars for them to work. Let the Heroku and AWS workers both process the work for a period of time, hence the need for idempotency. Once I'm confident the AWS ones work as intended, shut down the Heroku workers (see the ECS sketch below).

* Pick off individual paths/API endpoints to serve from AWS. In my case I also migrated all of this to API Gateway + Lambda; an ALB with EC2/ECS would have been an alternative. Add a new path-based route to your CDN (e.g., /v2/the-existing-path) and have its origin point to your non-Heroku service. Test it. Once it works, update the existing path that users are actually using to go to the new origin. That way, if you discover an issue you can quickly update the routing to have Heroku resume serving that route. Once you're confident, rinse and repeat with the next path. Continue until all traffic is served by the new host.

* If there's nothing left, scale down the remaining processes on Heroku.

I've gone with an all-in AWS approach, but the same general principle could apply to whatever platform you want to run on. I think the biggest thing that people I've spoken to about this overlook is that you don't have to make some big wholesale switch. There are ways to derisk it and take an incremental approach to migrating, which also drastically reduces the cost of making the wrong decision. If you can run just one route through AWS/Fly/DigitalOcean/whatever, you can get a sense of whether it will _actually_ work for your needs, and quickly roll back if you change your mind.
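To make a few of the steps above concrete, here are some rough Terraform sketches. Every name, ID, and value in them is a placeholder rather than my actual setup. First, adopting the existing Heroku app into Terraform so changes to it and to AWS land in a single plan:

    terraform {
      required_providers {
        heroku = {
          source = "heroku/heroku"
        }
      }
    }

    # Mirror the existing app as a resource, then adopt it with
    #   terraform import heroku_app.main my-app-name
    # so it's managed alongside the AWS resources from then on.
    resource "heroku_app" "main" {
      name   = "my-app-name"  # placeholder
      region = "us"

      # Config vars live in Terraform so they can later be synced to
      # the AWS side (e.g. ECS task env vars) from one source of truth.
      config_vars = {
        SOME_API_KEY = var.some_api_key  # hypothetical config var
      }
    }

    variable "some_api_key" {
      type      = string
      sensitive = true
    }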
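The strangler piece is just a CloudFront distribution with two origins: Heroku stays the default, and each migrated path gets its own ordered cache behavior. The API Gateway hostname and path here are made up; rolling a path back is just flipping its target_origin_id back to "heroku":

    resource "aws_cloudfront_distribution" "edge" {
      enabled = true

      # The existing Heroku app stays as the default origin.
      origin {
        origin_id   = "heroku"
        domain_name = "my-app-name.herokuapp.com"  # placeholder
        custom_origin_config {
          http_port              = 80
          https_port             = 443
          origin_protocol_policy = "https-only"
          origin_ssl_protocols   = ["TLSv1.2"]
        }
      }

      # The new AWS origin (API Gateway here; an ALB works the same way).
      origin {
        origin_id   = "aws"
        domain_name = "abc123.execute-api.us-east-1.amazonaws.com"  # hypothetical
        custom_origin_config {
          http_port              = 80
          https_port             = 443
          origin_protocol_policy = "https-only"
          origin_ssl_protocols   = ["TLSv1.2"]
        }
      }

      # Everything still goes to Heroku by default...
      default_cache_behavior {
        target_origin_id       = "heroku"
        viewer_protocol_policy = "redirect-to-https"
        allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
        cached_methods         = ["GET", "HEAD"]
        cache_policy_id        = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad" # managed CachingDisabled policy
      }

      # ...except each path that has been cut over. One of these per
      # migrated path; revert target_origin_id to roll back quickly.
      ordered_cache_behavior {
        path_pattern           = "/v2/the-existing-path*"  # hypothetical path
        target_origin_id       = "aws"
        viewer_protocol_policy = "redirect-to-https"
        allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
        cached_methods         = ["GET", "HEAD"]
        cache_policy_id        = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"
      }

      restrictions {
        geo_restriction {
          restriction_type = "none"
        }
      }

      viewer_certificate {
        cloudfront_default_certificate = true
      }
    }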
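And the RDS route I described as the alternative: provision the instance well ahead of time, then repoint DATABASE_URL during the cut-over window in the same apply. Sizing and credentials are illustrative only:

    resource "aws_db_instance" "primary" {
      identifier          = "app-primary"   # placeholder
      engine              = "postgres"
      instance_class      = "db.t3.micro"   # illustrative sizing
      allocated_storage   = 20
      db_name             = "app"
      username            = "app"
      password            = var.db_password # better: Secrets Manager
      skip_final_snapshot = false
    }

    variable "db_password" {
      type      = string
      sensitive = true
    }

    # At the cut-over window, feed this into the heroku_app
    # config_vars from the first sketch. aws_db_instance exposes
    # `endpoint` as "host:port", so the URL composes directly.
    locals {
      database_url = "postgres://app:${var.db_password}@${aws_db_instance.primary.endpoint}/app"
    }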
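Finally, the workers-in-containers step, as a Fargate sketch for a single worker type. The ECS cluster, IAM role, and subnets are assumed to be defined elsewhere in the config, and the image is a placeholder:

    resource "aws_ecs_task_definition" "worker" {
      family                   = "worker"  # one task definition per worker type
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = 256
      memory                   = 512
      execution_role_arn       = aws_iam_role.task_exec.arn  # assumed defined elsewhere

      container_definitions = jsonencode([{
        name  = "worker"
        image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:latest"  # placeholder
        # Expose the same config vars the Heroku worker uses, from the
        # same Terraform-managed source of truth.
        environment = [
          { name = "DATABASE_URL", value = local.database_url },
        ]
      }])
    }

    # Runs alongside the Heroku worker while both drain the same queue
    # (hence the idempotency work); scale Heroku down once confident.
    resource "aws_ecs_service" "worker" {
      name            = "worker"
      cluster         = aws_ecs_cluster.main.id  # assumed defined elsewhere
      task_definition = aws_ecs_task_definition.worker.arn
      desired_count   = 1
      launch_type     = "FARGATE"

      network_configuration {
        subnets = var.private_subnet_ids  # hypothetical variable
      }
    }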