I had a pretty terrible experience doing devops to automate an Airflow deployment in 2020. This was before 2.0, so I assume a lot of the bugs and issues have since been at least partially addressed.

My main gripes:

- The out-of-the-box configuration is not something you should use in production. It's basically Python multiprocessing (yikes) plus SQLite, the way you would run it on a developer machine. For production you'll instead want dedicated workers running on different machines, with a real database for state and something like Redis as the broker in between (rough config sketch at the end of this comment).

- The underlying problem is that Python is effectively single-threaded (the infamous GIL) and does synchronous IO by default. That kind of sucks when you are building something that ought to be asynchronous and spread across multiple threads, cores, CPUs, and machines. It's not a great language for that kind of job; in production it mostly acts as a facade for systems that are much better at it (Kubernetes, YARN, etc.).

- Most of the documentation is aimed at people doing stuff on their laptops, not at people trying to run this responsibly on actual servers. In our case that meant digging through third-party Git repositories with miscellaneous Terraform, AWS, etc. setup to figure out what configuration a responsible deployment actually needed.

- Python developers don't seem to grasp that installing a lot of Python dependencies on a production server is not a very desirable thing. Doing that sucks, to put it mildly. Virtual environments help, but either way it complicates deploying new DAGs to production, and it severely limits what you should package as a plain DAG versus what you should package with e.g. Docker.

- What that really means is that you should consider packaging most of your jobs with Docker. Airflow has a Docker operator and a Kubernetes operator/executor for exactly this (DAG sketch at the end of this comment). We found them a bit buggy, but we managed to patch our way around it.

- Speaking of Docker, at the time there was no well-supported dockerized setup for Airflow, only multiple unsupported bits of Kubernetes configuration from third parties, and that stuff looked complicated. I just checked, and they at least now provide a docker-compose file for a setup with PostgreSQL and Redis, so that's an improvement.

- The UI was actually worse than Jenkins', and Jenkins is a bit dated to say the least. Very web 1.0. I found myself hitting F5 a lot to make it stop lying about the state of my DAGs; at least Jenkins had auto-reload. I assume somebody might have fixed that by now, but the whole thing was pretty awful in terms of UX.

- Actual DAG programming and testing was a PITA as well. And since it is Python, you really do need to unit test DAGs before you deploy them and let them run against your production data; a small typo can really ruin your day (test sketch at the end of this comment).

We got it working in the end, but it was a lot of work. With Jenkins I could easily have had our jobs running in under a day.
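
For the production-config point: what we ended up with was roughly the CeleryExecutor setup. A sketch of the relevant airflow.cfg entries, with placeholder hostnames and credentials; section names follow the 1.10.x layout (newer 2.x versions moved sql_alchemy_conn to a [database] section):

    [core]
    executor = CeleryExecutor
    sql_alchemy_conn = postgresql+psycopg2://airflow:changeme@db.internal:5432/airflow

    [celery]
    broker_url = redis://redis.internal:6379/0
    result_backend = db+postgresql://airflow:changeme@db.internal:5432/airflow

None of this is the default; figuring out that this was the minimum for a multi-machine setup was most of the work.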
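
To make the Docker-packaging point concrete, here's roughly what a containerized job looks like as a DAG. The dag_id, image, and schedule are made up for illustration; the import path is the 1.10.x one (2.x moved DockerOperator into the docker provider package):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.docker_operator import DockerOperator

    with DAG(
        dag_id="nightly_export",          # hypothetical job name
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        export = DockerOperator(
            task_id="export",
            # All of the job's Python dependencies live in the image,
            # not on the Airflow worker.
            image="registry.example.com/jobs/export:1.3.0",
            # The command is templated; {{ ds }} is the execution date.
            command="python -m export --date {{ ds }}",
            auto_remove=True,  # clean up the container when the task finishes
            docker_url="unix://var/run/docker.sock",  # worker needs docker access
        )

The nice part is that deploying a new job version is just pushing a new image tag; the DAG file itself barely changes.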
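
And for the testing gripe: the single most useful thing we added was a DagBag smoke test in CI, so the "small typo" class of failure never reached the production scheduler. A minimal sketch, assuming your DAG files live under dags/:

    from airflow.models import DagBag

    def test_dags_import_cleanly():
        # Parsing the DAG folder catches syntax errors, bad imports and
        # broken top-level code before anything gets deployed.
        dag_bag = DagBag(dag_folder="dags/", include_examples=False)
        assert dag_bag.import_errors == {}, f"Broken DAGs: {dag_bag.import_errors}"

It won't catch logic bugs inside your tasks, but it kills the whole category of "the scheduler silently dropped my DAG because of a typo" failures.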