Nice to see some workflow engine action on Hacker News! :-)

I'm currently building pgflow, a simple, Postgres-first engine that uses task queues to perform real work.

It has an explicit DAG approach, strong type safety, a nice DSL in TypeScript, and a dedicated task queue worker that allows it to run solely on Supabase without any external tools.

I'm super close to the alpha release. If you want more info, check out the README for the SQL core (https://github.com/pgflow-dev/pgflow/tree/main/pkgs/core#readme) or my Twitter (https://x.com/pgflow_dev).

Hope that grabs someone's attention :-)
Cheers
This is really cool!

That said, my impression is that Airflow is a really dated choice for a greenfield project. There isn't a clear successor, though. I looked into this recently and was quickly overwhelmed by Prefect, Dagster, Temporal, and even newer ones like Hatchet and Hamilton.

Most of these frameworks now have docs / plugins / sister libraries geared around AI agents.

It would be really helpful to read a good technical blog post surveying the design patterns behind these different approaches, with thoughts on how to fit things together into a pipeline given the various quirks of LLMs (e.g. nondeterminism).

This page is a good start, even if it is written as an Airflow-specific how-to!
Truthfully, I've been a little skeptical of how many workloads will actually need "agents" vs. doing something totally deterministic with a little LLM augmentation. Seems like I'm not the only one who thinks the latter works a lot of the time!
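To make that concrete, here is a minimal sketch of what I mean by "deterministic with a little LLM augmentation", in plain Airflow TaskFlow; the step names and the `call_llm` helper are made up for illustration:

    import pendulum
    from airflow.decorators import dag, task

    def call_llm(prompt: str) -> str:
        # stand-in for whatever client you actually use (OpenAI SDK, pydantic-ai, ...)
        return "summary of: " + prompt[:60]

    @dag(schedule=None, start_date=pendulum.datetime(2024, 1, 1), catchup=False)
    def mostly_deterministic_pipeline():
        @task
        def extract() -> list[str]:
            # ordinary deterministic work: pull rows, files, API responses, ...
            return ["record a", "record b"]

        @task
        def summarize(records: list[str]) -> str:
            # the single LLM-augmented step in an otherwise deterministic DAG
            return call_llm("Summarize these records:\n" + "\n".join(records))

        @task
        def load(summary: str) -> None:
            # deterministic again: write the summary wherever it needs to go
            print(summary)

        load(summarize(extract()))

    mostly_deterministic_pipeline()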
Extremely bearish on existing tools solving agentic workflows well. If anyone does, it will be Temporal. Airflow and the like simply were not designed for highly dynamic execution, so they have all sorts of annoyances that will make them lose.
I'm sorry, I don't really know Airflow, but what's the point of `@task.agent` compared to plain old `return my_agent.run_sync(...)`? To me it feels like a more restrictive[1] and possibly less intuitive[2] API.

[1]: Limited to what decorator arguments can do. I suspect it could become an issue with `@task.branch` if some post-processing is needed to adjust for smaller models' finickiness.

[2]: Since the final step is described at the top of the function rather than in its body.
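For contrast, the plain version I have in mind is just a normal task body, roughly like this (using pydantic-ai's `Agent.run_sync`; I don't know exactly what arguments the article's `@task.agent` accepts, so treat this only as a sketch of the non-decorator side):

    from airflow.decorators import task
    from pydantic_ai import Agent

    my_agent = Agent("openai:gpt-4o", system_prompt="You are a terse summarizer.")

    @task
    def summarize(text: str) -> str:
        # plain old function body: call the agent directly, then post-process
        # the output however you like before returning (handy for finicky
        # smaller models) instead of being limited to decorator arguments
        result = my_agent.run_sync(f"Summarize:\n{text}")
        return result.output.strip()  # older pydantic-ai versions expose .data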
I'm looking into using LLM calls inside SQL triggers to make agents / 'agentic' workflows. LLM-powered workflows can get you powerful results and are basically the equivalent of 'spinning up' an agent.
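One caveat I keep in mind: blocking the trigger's transaction on an HTTP call to a model is usually a bad idea, so the pattern I'm leaning toward is trigger -> NOTIFY -> a small worker that makes the LLM call. A rough sketch with psycopg2, where the table, channel, and `call_llm` are all made up:

    import select
    import psycopg2

    def call_llm(prompt: str) -> str:
        # stand-in for whatever LLM client you actually use
        return "triage result for: " + prompt

    conn = psycopg2.connect("dbname=app")  # adjust to your database
    conn.autocommit = True
    cur = conn.cursor()

    # hypothetical trigger: whenever a ticket row is inserted, emit a NOTIFY
    cur.execute("""
        CREATE OR REPLACE FUNCTION notify_new_ticket() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('new_ticket', NEW.id::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER ticket_inserted
            AFTER INSERT ON tickets
            FOR EACH ROW EXECUTE FUNCTION notify_new_ticket();
    """)

    # worker loop: the LLM call happens here, outside the trigger's transaction
    cur.execute("LISTEN new_ticket;")
    while True:
        if select.select([conn], [], [], 60) == ([], [], []):
            continue  # timeout, keep waiting
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            print(call_llm(f"Triage ticket {note.payload}"))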
The decorators in the usage example look useless, more of a show-off than a real convenience.

In a real-life program, I don't think you will have so many LLM or agent calls that decorators give you any real code savings. On the contrary, a decorator makes it very hard to use parametric values, or values that come from config rather than being hard-coded or set up upfront at application startup like globals. That is a bad practice...
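To illustrate: decorator arguments are evaluated at import time, so anything baked into them is effectively a global, while a plain call site can take its parameters from whatever config you load at runtime. A toy sketch (the `llm_task` decorator, `call_llm`, and config keys are all invented):

    from functools import wraps

    def call_llm(prompt: str, model: str, temperature: float = 0.0) -> str:
        # stand-in for a real client call
        return f"[{model}] response to: {prompt}"

    def llm_task(model: str, temperature: float = 0.0):
        # decorator style: model/temperature are fixed when the module is imported
        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                prompt = fn(*args, **kwargs)
                return call_llm(prompt, model=model, temperature=temperature)
            return wrapper
        return decorator

    @llm_task(model="gpt-4o")  # hard-coded at definition time
    def classify(ticket: str) -> str:
        return f"Classify this ticket: {ticket}"

    # plain-call style: the same thing, but the model comes from runtime config
    def classify_with(config: dict, ticket: str) -> str:
        return call_llm(f"Classify this ticket: {ticket}", model=config["model"])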
This is about workflows that use AI, but it led me to think of the inverse: has anyone experimented with AI agents defining and iterating on long-running workflows?