For a professional product, my preference would be to build a standalone app, package it as a Docker image, bake an AMI and deploy to a VPS in an auto scaling group with an ELB in front, or, in a large org with a platform team, Kubernetes. If you need other AWS services like SQS, fine; it's only deployment/packaging we're talking about.

Development and deployment are so much simpler, and for a business with money the price difference is negligible. You can dev/test locally, you're not tied to a provider, and it's essentially just another boring web app.

However, for personal projects I've been playing with serverless out of interest, to see if it's ready yet, and instead of paying $10-20 a month for a VPS I pay fractions of a cent.

I develop my Lambda as a monolith application, not a lambda per endpoint. I'm told this is an anti-pattern… my take is that I'm just using Lambda as another compute deployment target, and it's fine. I use hexagonal architecture, so my app knows nothing about Lambda, which makes unit testing easy.

Next I wire up a very thin adapter layer that takes the Lambda request JSON and converts it to the values my app needs for routing (rough sketch at the end of this comment). This sits at the very edge of the app. I like this design regardless of Lambda: I can swap out any web framework easily, or even build a CLI frontend for testing with minimal effort. In the context of Lambda, hexagonal architecture means I can bin Lambda, replace it with a standard web framework and deploy as a standalone app with minimal effort if I need to.

With the Lambda in place, I have a Cloudflare Worker as the entry point: it takes a request and forwards it to my Lambda. I use a Cloudflare Worker because it's cheap/free (generous free tier) and I get a cache at the edge. I'll use Cloudflare Pages, or S3 with Cloudflare in front, for static assets.

I use Lambda for the app instead of Cloudflare Workers simply because I want to interact with DynamoDB/S3, and I can manage permissions better inside AWS with IAM. I also want to use Rust, which has very fast Lambda execution times, and I had a few issues with Cloudflare Workers' wasm support that I lost interest in figuring out since I'm only experimenting. As I'm fronting with Cloudflare, I'm also extremely dogmatic about cache headers from the Lambda and propagating them to reduce calls to the origin/Lambda.

The end result is reasonably performant. It's fast but not the fastest, as expected with the extra hops/latency, and it's extremely cheap. A small pet project may cost single-digit cents, if even that. It'll also handle large volumes of traffic easily, without worrying about provisioning.

However, I have to jump through too many hoops to get what I have, more than I'd like on a professional project. The orchestration is complex, and it feels like what I save in $$$ I pay for in slower dev time, jumping through hoops to chase the absolute lowest cost. I enjoy this stuff and it's a personal project done for education; still, I'd be hesitant to go this way for a real paid job, as interesting as I find it.

Also, pay as you go is great when it's costing fractions of a cent, but it's terrible in that it opens up a new attack vector: someone DoS'ing your unbounded pay-as-you-go services and you waking up to a very large bill. Always build in rate limiting for services you use with on-demand pricing.
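To make the adapter layer concrete, here's a rough sketch of the shape of it using the lambda_http crate; the names (AppRequest, AppResponse, handle) are illustrative rather than my actual code:

    // Rough sketch, deps assumed: lambda_http + tokio.
    use lambda_http::{run, service_fn, Body, Error, Request, Response};

    // Framework-agnostic request/response types the core works with.
    struct AppRequest {
        method: String,
        path: String,
    }

    struct AppResponse {
        status: u16,
        body: String,
        // How long the edge may cache this response, if at all.
        cache_max_age: Option<u32>,
    }

    // The application core: routing/business logic, zero Lambda imports,
    // trivially unit testable.
    fn handle(req: AppRequest) -> AppResponse {
        match (req.method.as_str(), req.path.as_str()) {
            ("GET", "/health") => AppResponse {
                status: 200,
                body: "ok".into(),
                cache_max_age: Some(60),
            },
            _ => AppResponse {
                status: 404,
                body: "not found".into(),
                cache_max_age: None,
            },
        }
    }

    // The thin adapter at the edge: Lambda event in, AppRequest to the core,
    // AppResponse back out as HTTP, with the cache headers Cloudflare will
    // honour so repeat requests don't have to hit the Lambda.
    async fn adapter(event: Request) -> Result<Response<Body>, Error> {
        let app_req = AppRequest {
            method: event.method().as_str().to_owned(),
            path: event.uri().path().to_owned(),
        };

        let app_res = handle(app_req);

        let mut builder = Response::builder().status(app_res.status);
        if let Some(secs) = app_res.cache_max_age {
            builder = builder.header("cache-control", format!("public, max-age={secs}"));
        }
        Ok(builder.body(Body::from(app_res.body))?)
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        run(service_fn(adapter)).await
    }

Everything Lambda-specific lives in the adapter and main; the core is plain Rust I can unit test directly or put behind a standard web framework later.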
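The Worker in front is little more than a forwarder. A GET-only sketch with workers-rs, just to keep everything in one language (the function URL below is a placeholder, and in practice the Worker doesn't have to be Rust at all):

    // GET-only sketch; YOUR-FUNCTION-URL is a placeholder for the real origin.
    use worker::*;

    #[event(fetch)]
    pub async fn main(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
        let incoming = req.url()?;

        // Rebuild path + query against the Lambda function URL (or API Gateway) origin.
        let upstream = format!(
            "https://YOUR-FUNCTION-URL.lambda-url.eu-west-1.on.aws{}{}",
            incoming.path(),
            incoming.query().map(|q| format!("?{q}")).unwrap_or_default()
        );
        let url = Url::parse(&upstream).map_err(|e| Error::RustError(e.to_string()))?;

        // Forward to the Lambda. Edge caching then comes down to the Cache-Control
        // headers the Lambda returns, which is why I'm strict about setting them.
        Fetch::Url(url).send().await
    }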