There's a lot of harsh commentary here, but I think people are missing the point. The fact that so much software needs root access to an (often mutable) global environment in order to build properly is a <i>bug</i>.<p>A growing number of build systems encourage squashing these bugs. The resulting build outputs are simpler, often more portable, and easier to reason about. That translates to simpler deployment, simpler operations, and fewer edge cases to debug.<p>IMHO, the most promising answer to the 5 minute limit is finer granularity and better caching of dependency inputs.
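To make the caching point concrete, here's a minimal sketch of keying build artifacts by a hash of their exact dependency inputs, so unchanged inputs never rebuild; the cache directory, file list, and example `go build` command are just illustrative assumptions:

    import hashlib, os, shutil, subprocess

    CACHE_DIR = os.path.expanduser("~/.cache/ci-artifacts")  # hypothetical local artifact cache

    def input_digest(paths):
        """Hash the exact dependency inputs (lockfile, sources) that feed the build."""
        h = hashlib.sha256()
        for path in sorted(paths):
            with open(path, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def cached_build(inputs, output, build_cmd):
        """Skip the build entirely when the same inputs were built before."""
        key = os.path.join(CACHE_DIR, input_digest(inputs))
        if os.path.exists(key):
            shutil.copy(key, output)           # cache hit: reuse the previous artifact
            return "cache hit"
        subprocess.run(build_cmd, check=True)  # cache miss: run the real build, no root needed
        os.makedirs(CACHE_DIR, exist_ok=True)
        shutil.copy(output, key)
        return "built"

    # e.g. cached_build(["go.sum", "main.go"], "bin/app",
    #                   ["go", "build", "-o", "bin/app", "."])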
Cool! I had an idea for something like this, but instead of having each build be its own Lambda event, I wanted to make each individual test its own Lambda event. The goal is for the build time of a complex project to boil down to the time it takes to set up plus run the longest test.
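A rough sketch of that fan-out, assuming a hypothetical `run-single-test` Lambda function that checks out the commit and runs one named test (using boto3, the AWS SDK for Python):

    import json
    import boto3

    lam = boto3.client("lambda")

    def fan_out_tests(commit_sha, test_names):
        """Fire one async Lambda invocation per test; wall time ~ setup + slowest test."""
        for name in test_names:
            lam.invoke(
                FunctionName="run-single-test",  # hypothetical per-test runner function
                InvocationType="Event",          # async: don't block on each test here
                Payload=json.dumps({"commit": commit_sha, "test": name}),
            )

    # Results would be collected elsewhere, e.g. each invocation writing its
    # pass/fail status to DynamoDB or S3 keyed by commit and test name.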
Cute.<p>> No root access
> 5 min max build time
> Bring-your-own-binaries – Lambda has a limited selection of installed software
> 1.5GB max memory
> Linux only<p>There's a reason Jenkins is still so widely used, and it's not utilization or the other things pointed out here. When your project gets big enough, managing the CI pipeline turns into a distributed systems problem, with distributed queues, locks, error/failure recovery, and all the other headaches such systems bring. Heck, reporting alone on a test suite with 12k tests is a problem in and of itself.
This looks very cool, but the 5 minute build time limit (an inherent limitation of the Lambda service) makes it less than ideal as a build system. The author does address this by recommending Docker containers on ECS as an alternative for long-running builds.
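For what it's worth, that ECS escape hatch could look roughly like this boto3 sketch; the cluster, task definition, and container names are hypothetical:

    import boto3

    ecs = boto3.client("ecs")

    def start_long_build(commit_sha):
        """Run a build that needs more than Lambda's 5 minutes as an ECS task."""
        return ecs.run_task(
            cluster="ci-cluster",            # hypothetical ECS cluster
            taskDefinition="ci-build-task",  # hypothetical task definition with the build image
            overrides={
                "containerOverrides": [{
                    "name": "builder",       # container name from the task definition
                    "environment": [{"name": "COMMIT_SHA", "value": commit_sha}],
                }]
            },
            count=1,
        )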
Hardware is usually the cheapest component of software manufacturing, but we found we wanted CI/CD spinning 24x365 as much as it could, increasing the resolution to roughly single commits with the shortest possible cycles. With a sizeable codebase and a thorough test suite, the AWS bills went up so quickly that even the proponents decided it wasn't worth it. We restored our old CI infra and were able to add a couple of new servers too. Throughput increased considerably, with money to spare. Still an interesting experiment, but it showed that burning money on Amazon doesn't automatically translate into moving faster.
Very nice!<p>This looks very close to the ideal CI infrastructure. I'm used to waiting on queues and on long VM or container boots and configuration on other services.<p>We can almost certainly count on Lambda getting longer execution times and higher memory limits. We can also count on containerization solving the root-access problem.<p>We should also be building software with the goal of tests that run within reasonable limits like these.<p>`time make test` takes 39 seconds on my business's Go projects. I'd consider a 5m test suite serious tech debt. The time developers wait for feedback on tests and deployments is becoming a business bottleneck in the continuous delivery age.
How is it serverless if it runs on an Amazon server? And how is it serverless if you need to consume a service (AWS, in this case)?<p>Every time I see something nice, there's an increasing chance I'm going to end up sad because it requires some sort of external provider like AWS, DO, Heroku, GCE... I don't have any of those and I don't want any of them.