I recently had the experience of setting up some Prefect pipelines, which I can compare to this article. Note that while I'm not new to data engineering, I'm new to open-source frameworks, and I have some insight into Airflow (I've studied its architecture in depth and written a lot of code in it).

Prefect is generally very easy to use. Essentially, you: (a) write a Python-based flow, which defines some job to run (with subtasks), (b) run an orchestrator on a server somewhere, (c) run an agent on a server somewhere (to execute the flow when instructed by the orchestrator), and (d) connect to the orchestrator, build & apply a deployment, and run it. (A sketch of step (a) is at the end of this comment.)

I find the docs a little half-baked right now. One example: cron schedules, which you'd think are essential to something like Prefect, basically can't be set up (as of a month ago) without touching the Prefect UI. This is extremely odd.

I also found it fairly confusing which components are supposed to be checked into source control and which aren't. I blame this on Python deployment generally being odd and confusing, but the Prefect docs don't make it any clearer. Prefect assumes there's an S3-like storage location that both the submitting computer (my laptop) and the orchestrator (the server) can access.

Overall I find it quite handy, and we probably won't switch. It feels more lightweight than, say, using full Docker containers, which we probably don't need right now. The UI is nicer than Airflow's, and the orchestrator & agent are much easier on resources. It also feels more reproducible. I haven't tried Prefect Cloud, and we're unlikely to (security & cost are the main reasons).
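For concreteness, here's roughly what step (a) looks like. This is a minimal sketch assuming Prefect 2.x; the flow/task names and the retry setting are my own illustration, not anything Prefect prescribes:

    from prefect import flow, task, get_run_logger

    @task(retries=2)  # tasks can retry independently of the flow
    def extract() -> list[int]:
        # stand-in for pulling rows from a source system
        return [1, 2, 3]

    @task
    def transform(rows: list[int]) -> list[int]:
        return [r * 10 for r in rows]

    @flow(name="etl")
    def etl():
        logger = get_run_logger()
        rows = transform(extract())
        logger.info("Loaded %d rows", len(rows))

    if __name__ == "__main__":
        etl()  # runs locally with no orchestrator, handy for testing

Steps (b) through (d) are CLI commands, roughly: prefect server start for the orchestrator, prefect agent start -q default for the agent, then prefect deployment build flow.py:etl -n nightly -q default followed by prefect deployment apply etl-deployment.yaml, and finally prefect deployment run etl/nightly to kick it off. (Those are the 2.x agent-era command names; check prefect --help for your version.)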